Google encountered an error when trying to access this URL. We may have encountered a DNS error or timeout, for instance. Your server may have been down or busy when we tried to access the page. Possible URL unreachable errors include:
5xx error See RFC 2616 for a complete list of these status codes. Likely causes of this error are an internal server error or a server-busy error. If the server is busy, it may have returned an overloaded status to ask Googlebot to crawl the site more slowly. In this case, we'll return later to crawl additional pages.
DNS issue We couldn't communicate with the DNS server when we tried to access the page. This could be because your server is down, or there is an issue with the DNS routing to your domain. Make sure that your domain is resolving correctly and try again.
robots.txt file unreachable Before we crawled the pages of your site, we tried to check your robots.txt file to ensure we didn't crawl any pages that you had roboted out. However, your robots.txt file was unreachable. To make sure we didn't crawl any pages listed in that file, we postponed our crawl. When this happens, we return to your site later and crawl it once we can reach your robots.txt file. Note that this is different from a 404 response when looking for a robots.txt file. If we receive a 404, we assume that a robots.txt file does not exist and we continue the crawl.
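The distinction described above — postpone the crawl when robots.txt is unreachable, but crawl freely when it returns a 404 — amounts to a small decision rule. A Python sketch for illustration (a simplification, not Googlebot's actual code; the function name is made up):

```python
def robots_crawl_decision(status):
    """Map the result of fetching robots.txt to a crawl decision.

    `status` is the HTTP status code of the robots.txt fetch, or None
    when the fetch failed at the network level (DNS error, timeout,
    connection refused).
    """
    if status is None or status >= 500:
        return "postpone"   # robots.txt unreachable: defer the whole crawl
    if status == 404:
        return "crawl"      # no robots.txt: assume nothing is disallowed
    if status == 200:
        return "parse"      # fetch OK: obey its rules, then crawl
    return "postpone"       # anything ambiguous: play safe
```

The key asymmetry is that a 404 is a definite answer ("there are no rules"), while an unreachable file is no answer at all, so the only safe move is to wait.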
Network unreachable We encountered a network error when we tried to access the page. This can happen when Googlebot hits a time-out or other network-related issue when requesting a file from your site and is forced to abandon the request. This can be caused by one or more of the following:
  • Excessive page load times due to dynamic pages taking too long to respond
  • Excessive page load times due to a site's hosting server being down, overloaded, or misconfigured
  • The hosting server is blocking Google's web crawler
  • A DNS configuration issue
Failed to connect A connection could not be established.
No response The server closed the connection before we could receive a response.
Truncated response The server closed the connection before we could receive a full response, and the body of the response appears to be truncated.
Connection refused The server refused the connection.
Truncated headers The server closed the connection before full headers were sent.


Spam Trigger Words

4u
mlm
xxx
! and $
! and free
$$
,000 and !! and $
///////////////
@mlm
@public
@savvy
100% satisfied
18+
absolute
accept credit cards
act now! don’t hesitate!
additional income
addresses on cd
adult s
adult web
adults only
advertisement
all natural
amazing
apply online
as seen on
auto email removal
avoid bankruptcy
be 18
be amazed
be your own boss
being a member
big bucks
bill 1618
billing address
billion dollars
brand new pager
bulk email
buy direct
buying judgments
cable converter
call free
call now
calling creditors
cancel at any time
cannot be combined with any other offer
can’t live without
cards accepted
cash bonus
cashcashcash
casino
cell phone cancer scam
cents on the dollar
check or money order
claims not to be selling anything
claims to be in accordance with some spam law
claims to be legal
claims you are a winner
claims you registered with some kind of partner
click below
click here link
click to remove
click to remove mailto
compare rates
compete for your business
confidentially on all orders
congratulations
consolidate debt and credit
copy accurately
copy dvds
credit bureaus
credit card offers
cures baldness
dear email
dear friend
dear somebody
different reply to
dig up dirt on friends
direct email
direct marketing
discusses search engine listings
do it today
don’t delete
drastically reduced
earn per week
easy terms
eliminate bad credit
email harvest
email marketing
expect to earn
extra income
fantastic deal
fast viagra delivery
financial freedom
find out anything
for free
for free!
for free?
for instant access
for just $ (some amt) free access
free cell phone
free consultation
free dvd
free grant money
free hosting
free installation
free investment
free leads
free membership
free money
free offer
free preview
free priority mail
free quote
free sample
free trial
free website
friend@
full refund
get it now
get paid
get started now
gift certificate
great offer
guarantee
guarantee and
have you been turned down?
hello@
hidden assets
home employment
human growth hormone
if only it were that easy
in accordance with laws
increase sales
increase traffic
insurance
investment decision
it’s effective
join millions of americans
laser printer
limited time only
long distance phone offer
lose weight spam
lower interest rates
lower monthly payment
lowest price
luxury car
mail in order form
mail@
marketing solutions
mass email
meet singles
member stuff
message contains disclaimer
mlm
money back
money making
money-back guarantee
month trial offer
more info and visit and $
more internet traffic
mortgage rates
multi level marketing
must be 18
must be 21
name brand
new customers only
new domain extensions
nigerian
no age restrictions
no catch
no claim forms
no cost
no credit check
no disappointment
no experience
no fees
no gimmick
no inventory
no investment
no medical exams
no middleman
no obligation
no purchase necessary
no questions asked
no selling
no strings attached
not intended
off shore
offer expires
offers coupon
offers extra cash
offers free (often stolen) passwords
once in lifetime
one hundred percent free
one hundred percent guaranteed
one time mailing one-time mail
online biz opportunity
online pharmacy
only $
opportunity
opt in
order now
order now!
order status
order today
orders shipped by priority mail
outstanding values
over 18
over 21
pennies a day
people just leave money laying around
please read
potential earnings
print form signature
print out and fax
produced and sent out
profits
profits@
promise you …!
public@
pure profit
real thing
refinance home
removal instructions
remove in quotes
remove subject
removes wrinkles
reply remove subject
requires initial investment
reserves the right
reverses aging
risk free
round the world
s 1618
safeguard notice
sales@
satisfaction
satisfaction guaranteed
save $
save big money
save up to
score with babes
section 301
see for yourself
sent in compliance
serious cash
serious only
shopping spree
sign up free today
social security number
special promotion
stainless steel
stock alert
stock disclaimer statement
stock pick
stop snoring
strong buy
stuff on sale
subject to credit
success.
success@
supplies are limited
take action now
talks about hidden charges
talks about prizes
tells you it’s an ad
terms and conditions
the best rates
the following form
they keep your money — no refund!
they’re just giving it away
this isn’t junk
this isn’t spam
university diplomas
unlimited
unsecured credit/debt
urgent
us dollars
vacation offers
viagra and other drugs
wants credit card
we hate spam
we honor all
weekend getaway
what are you waiting for?
while supplies last
while you sleep
who really wins?
why pay more?
will not believe your eyes
winner
winning
work at home
you have been selected
your income



Google's co-founder Larry Page focused on developing the "perfect search engine": one that understands exactly what you mean when you type a search term and gives you exactly the results you were looking for.
At our last count, we can happily reveal, there were more than 120 different factors that affect natural rankings under Google's latest algorithm. The latest update penalises websites that use forums with negative feedback to boost their rankings.
Google's New Algorithm 2010 / 2011 Revealed
Many SEO gurus have attempted to give a rough outline of what the Google algorithm might look like. Based on research and suggestions, the formula might basically look like this:
Google's Score = (Kw Usage Score * 0.3) + (Domain * 0.25) + (PR Score * 0.25) + (Inbound Link Score * 0.25) + (User Data * 0.1) + (Content Quality Score * 0.1) + (Manual Boosts) - (Automated & Manual Penalties)
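Taken literally (and note the quoted weights sum to 1.25, so this can only be an illustrative sketch, not a normalised ranking model), the formula could be written in Python as follows. The function name and the assumption that each factor is scored on a common 0–10 scale are inventions for the example:

```python
def google_score(kw_usage, domain, pr, inbound_links, user_data,
                 content_quality, manual_boosts=0.0, penalties=0.0):
    """Hypothetical ranking score using the weights quoted in the article.

    This is speculation from SEO commentators, not Google's actual
    algorithm; factor scores are assumed to share a 0-10 scale.
    """
    return (kw_usage * 0.3
            + domain * 0.25
            + pr * 0.25
            + inbound_links * 0.25
            + user_data * 0.1
            + content_quality * 0.1
            + manual_boosts
            - penalties)
```

For example, a page scoring 10 on every factor with no boosts or penalties would come out at 12.5 under these weights, which is why the formula is best read as a list of factors and their relative importance rather than an actual equation.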
 
If you are asking yourself how the latest 2011 algorithm works, or what SEO means, you will find all the secrets you want revealed on this website, absolutely free. Learn how the algorithm works and the most common factors that affect your website's ranking with this breakthrough information.

Search Engine Optimisation Key Word Usage Factors
  • Keyword in title tags
  • Rich Header tags
  • Documents rich with relevant text information
  • Keyword in alt tag
  • Keywords in internal links pointing to the page
  • Keywords in the domain and/or URL
  • The order in which keywords appear
Domain Strength / Speed
  • Domain Strength
  • Domain Speed
  • Local/specific domain name
  • Quality Hosting
Registration history
  • When the Domain was registered
  • Strength of the links pointing to the domain
  • The topical neighborhood of the domain based on its inbound and outbound links
  • Historical use and links pattern to domain
  • Inbound links and referrals from social media
Inbound Link Score
  • Age and quality of links
  • Quality of the domains sending links (non paid)
  • Links from Authority Websites
  • Quality of pages sending links and relevance
  • Anchor text of links
  • Link quantity/weight metric (Page Rank or a variation)
  • Subject matter of linking pages/sites
User Data
  • Historical CTR to page in SERPs
  • Time users spend on a page
  • Search requests for URL/domain
  • Historical visits/use of URL/domain by users
Content Quality Score
  • Duplicate content filter
  • Order of key words in content
  • Content update frequency
  • Highlighting and visibility of keywords
  • Usability and W3C rules
Negative Penalties (for SEO purposes)
  • Forums
  • Facebook
  • Twitter
  • Article Submissions
  • Blogs
  • Link Exchange
  • Paid Links
  • FFA's (Free for all backlinks)
Over the years Google has persistently pursued innovation, constantly updated its algorithm and refused to accept the limitations of existing models.
As a result, Google's breakthrough PageRank™ technology changed the way searches are conducted today. And if there is one SEO company that knows how the algorithm works and how it can boost your rankings in Google's organic search, it is White Hat Works.
What's new about Google's latest algorithm in 2010 / 2011?
There is much talk in the SEO world about what Google is going to focus on in 2010. Matt Cutts, head of Google's Webspam team, has hinted in various forums, on YouTube and on his blog at what SEO professionals' focus should be in 2010. As part of the Caffeine project, the loading speed of your website and its pages will apparently now play a major part in the algorithm.
To achieve faster speeds, your website needs to be hosted on a very fast host, and the overall size of your web pages needs to be reduced. In practice this means moving to a better Internet Service Provider and serving leaner pages that download faster.
This will mean less content per page, use of CSS (Cascading Style Sheets), and images that load faster. A page download time of 3 seconds or less is pretty good, but in some cases 1 second or lower may have to be achieved. You can now measure this speed with a tool in Google's Webmaster Tools.
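If you want a rough figure outside Webmaster Tools, you can time a raw HTML download yourself. This Python sketch measures only the HTML transfer, not images, CSS or rendering, so it understates the full page-load time a visitor sees:

```python
import time
import urllib.request

def page_download_seconds(url, timeout=30):
    """Time how long it takes to download the raw HTML of a page."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # pull the whole body, not just the headers
    return time.monotonic() - start
```

Run it a few times and take the median, since a single request can be skewed by network noise.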
Does this mean Google is, in a way, pushing developers to create better websites that load faster, and content writers to write quality content instead of quantity? Will the digital marketing people have to stay creative while keeping their artwork short, sweet and fast-loading? Do Flash animations go out the window? Is it wise to host video clips on your own website, or better to host them on YouTube instead?
To most of these questions, the answer is probably: yes.
What could happen is Google favouring certain websites and, on the other hand, penalising others. Website owners who keep stuffing their pages with content, customer reviews, endless blogs and articles, videos, links and endless images will be penalised.
The good news is websites that are clean, focused, compatible and fast will benefit.
Many SEO professionals are saying that this is not fair on SMEs (small to medium-sized businesses) that cannot afford super-fast hosting and do not have large teams to restructure their websites for speed. The question is how fast the corporate companies can react to the ever-growing demands of the Internet. Probably not very: it will take them months before they even decide what to do. SMEs, on the other hand, will probably be in a better position to react much faster thanks to quick decision-making, compensating for all the other factors.
The SEO e-marketers also ask about regions with slow internet access, such as parts of Scotland and other rural areas where you cannot get broadband. Are those sites going to be penalised? Probably not, as it is the Google bots measuring the speed, not the speed at which the website loads on an end user's computer in those areas. In fact, by "forcing" developers to increase their websites' overall speed, Google will benefit these areas without broadband in the long run. Mobile phone users with web-enabled devices such as the iPhone and Google's Nexus will be able to load more and more websites, especially the leaner, faster-loading ones.
White Hat Works Conclusion for 2010 and 2011:
  • Host your website on a super fast Internet Service Provider
  • Use CSS as much as possible and comply with website usability
  • Write quality content, not quantity content
  • Convert your Flash website to an HTML website
  • Export your Flash animations where possible - more SEO Tips
  • Keep images and their file sizes to a minimum without losing quality
  • Test your website in various browsers and mobile phone devices
  • Make your website Google Friendly
  • Get free guidance from a reputable SEO E-marketing Company
Faster websites mean Google can keep up with the growth of the Internet without having to keep buying and installing new servers. Many have asked how many servers Google has. This is a well-kept secret, but if you do your maths the number could be around 700,000 and growing. These servers are spread over 100 data centres around the globe, making Google the largest IT producer of carbon (CO2) emissions on Earth.
Google is in the business of providing a top-notch service and encouraging growth, while at the same time staying profitable by keeping its costs and carbon emissions down.
Hence the need for this new factor in the updated 2010 Google algorithm. Maybe the main reason Google is focusing on speed is that what it is really trying to do is reduce its carbon footprint, and we (SEO people and website owners) all need to help it get there.
One affected company said:
We are an ecommerce site that has been running for almost 5 months and have seen steadily growing traffic. We have made no major changes to our website recently but are continually adding content and products to the site. Our site has been severely affected (almost 50% down on traffic) since last Tuesday (07/12/10) which coincides with the latest 2011 algorithm change which was the response to the New York Times article regarding Black Hat SEO techniques.
I have struggled to find much information about it and it doesn't seem to be widely talked about at the moment. As far as I can tell it will specifically target site reviews and penalise people with negative feedback. Because we are a relatively young site we don't have any feedback (negative or otherwise). Is it possible that this could have such a dramatic effect on a site that has essentially done nothing wrong, and has anyone had the same problem?
Any advice would be greatly appreciated because we are being hit particularly hard in the lead up to Christmas which should be our busiest weeks.



Page Rank

Rank Checker (Ranking Tool) - Get an overview of your website's ranking
PageRank Lookup (PageRank Lookup - SEO Tools - Search Engine Optimization, Google Optimization) - check the PageRank for a website
Google PageRank Prediction (PageRank Prediction - Predict Page Rank Predictor) - check predicted PR of a site
Multi-Rank Checker (Rank Checker) - View your Google PageRank and Alexa Ranking in bulk
PageRank Checker (PageRank Checker - Check Your Google Page Rank) - View your Google PageRank on different Google servers

Links related

Reciprocal Link Check (Reciprocal Link Checker) - check whether your link partners are linking back to your website
Link Popularity Checker (LinkWorth | LinkQuote - Text Link Quote) - a popularity score given to a website based on inbound links
LinkPrice Lookup (LinkWorth | LinkQuote - Text Link Quote) - check the price of your link
Link Price Calculator (Link Price Calculator) - another tool for checking the price of links
Link Checker (http://www.ranks.nl/cgi-bin/ranksnl/tools/checklink.pl) - check your links to see if they are still valid
Link Popularity (Link Popularity - SEO Tools - Search Engine Optimization, Google Optimization) - checks the total number of web pages which link to a website
Link Price Calculator (Link Price Calculator - SEO Tools - Search Engine Optimization, Google Optimization) - help to determine the approximate amount you should be paying (or charging) per month for a text link (ad) from each and every page of the specified website
Site Link Analyzer (Site Link Analyzer - SEO Tools - Search Engine Optimization, Google Optimization) - analyze a given web page and return a table of data containing columns of outbound links and their associated anchor text
URL Rewriting (URL Rewriting - SEO Tools - Search Engine Optimization, Google Optimization) - convert dynamic URLs into static looking HTML URLs
Link Extractor (Link Extractor) - Extract links from a specific web page
Link Shortener (Link Shortener - Short/Shortcut Link - Hits Counter) - shorten a web address
Backlink Anchor Text Analyzer (http://www.webconfs.com/anchor-text-analysis.php) - check the link text used by your backlinks to link to your website

Keyword related

Keyword Verification (Search Engine Placement Check - Marketleap Search Engine Verification Tool) - checks to see if your site is in the top three pages of a search engine result for a specific keyword
Keyword Density Analyser (Mark Horrell - Keyword density analyser) - another SEO tools for keywords
Keyword Cloud (Keyword Cloud - SEO Tools - Search Engine Optimization, Google Optimization) - a visual representation of keywords used on a website
Keyword Density (Keyword Density - SEO Tools - Search Engine Optimization, Google Optimization) - another SEO tool for checking keyword density
Keyword Difficulty Check (Keyword Difficulty Check - SEO Tools - Search Engine Optimization, Google Optimization) - see how difficult it would be to rank for specific keywords or phrases
Keyword Optimizer (Keyword Optimizer - SEO Tools - Search Engine Optimization, Google Optimization) - optimize your keywords with this tool
Keyword Suggestion (Keyword Suggestion Tool) - Find related keywords matching your search
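The keyword-density idea behind several of the tools above is simple: the percentage of all words on a page that are the keyword. A toy Python version for illustration (not how any of the listed tools actually works; it ignores HTML markup and multi-word phrases):

```python
import re

def keyword_density(text, keyword):
    """Percentage of words in `text` that equal `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = words.count(keyword.lower())
    return 100.0 * hits / len(words)
```

So the text "SEO tools for SEO" has a density of 50% for the keyword "seo" (2 hits out of 4 words).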


Sitemap

XML-Sitemaps (Create your Google Sitemap Online - XML Sitemaps Generator) - Build your Site Map online (XML, ROR, Text, HTML)
Gsitecrawler (Google Sitemap Generator for Windows :: GSiteCrawler) - Google (and Yahoo!) Sitemap Generator for Windows
Validate XML Sitemap (Google XML Sitemap Validator - XML Sitemaps Generator) - Search Engine Optimization tool for validating your XML sitemaps

Search Engines

Google Analytics (http://www.google.com/analytics/) - tells you everything about your visitors
Google Banned Checker (http://www.iwebtool.com/google_banned) - check whether a site is banned by Google or not
Search Engine Bot Simulator (http://www.xml-sitemaps.com/se-bot-simulator.html) - SEO Tool to simulate search engine parsing of webpages and display discovered links.
Indexed pages (http://www.seochat.com/seo-tools/indexed-pages/) - check the no. of indexed pages for your blog
Spider Simulator (http://www.seochat.com/seo-tools/spider-simulator/) - simulates a search engine by displaying the contents of a web page
Search Engine Friendly Redirect Checker (http://www.seochat.com/seo-tools/redirect-check/) - checks the exact HTTP headers that a web server is sending with an HTTP response.
Search Engine Position (http://www.iwebtool.com/search_engine_position) - Locate your search listings on Google and Yahoo!
Search Listings Preview (http://www.iwebtool.com/search_listings_preview) - Preview your website on Google, MSN and Yahoo! Search

HTML related

HTML Encrypt (http://www.iwebtool.com/html_encrypter) - Hide your HTML source code
HTML Optimizer (http://www.iwebtool.com/html_optimizer) - Optimize and clean your HTML source code
HTTP Headers (http://www.iwebtool.com/http_headers) - Extract the HTTP Headers of a web page
HTTP Headers Viewer (http://www.xml-sitemaps.com/http-headers-viewer.html) - check HTTP headers for any specific URL
Meta-tags Extractor (http://www.iwebtool.com/metatags_extractor) - Extract meta-tags information from a web page
Meta-tags Generator (http://www.iwebtool.com/metatags_generator) - Generate and configure your meta-tags
META Analyzer (http://www.seochat.com/seo-tools/meta-analyzer/) - analyze a website's meta tags
Meta Tag Generator (http://www.seochat.com/seo-tools/meta-tag-generator/) - help you to generate meta tags
Source Code Viewer (http://www.iwebtool.com/code_viewer) - View the source code of a page

Domain related

Alexa Traffic Rank (http://www.iwebtool.com/alexa_traffic_rank) - View and compare Alexa Ranking graphs
Domain Age Tool (http://www.webconfs.com/domain-age.php) - find out the age of your competitor's domains
Domain Stats Tool (http://www.webconfs.com/domain-stats.php) - get all kind of statistics of your competitor's domains
Domain Availability (http://www.iwebtool.com/domain_availability) - Check the availability of domains
Domain Look-up (http://www.iwebtool.com/domain_lookup) - Retrieve a range of information about a domain
Domain Whois (http://www.iwebtool.com/whois) - Retrieve domain whois information
Instant Domain Checker (http://www.iwebtool.com/instant) - Check the availability of domains instantly
Ping Test (http://www.iwebtool.com/ping) - Check the presence of an active connection
Reverse IP/Look-up (http://www.iwebtool.com/reverse_ip) - Resolve a host to an IP address
Server Status (http://www.iwebtool.com/server_status) - Check if your website is online or offline
Website Speed Test (http://www.iwebtool.com/speed_test) - Find out how fast your website loads
What Is My IP Address (http://www.whatismyipaddress.com/) - shows your ip address
IP to City (http://www.webconfs.com/ip-to-city.php) - determine the Country, City, Latitude and Longitude of an IP Address
Website to Country (http://www.webconfs.com/website-to-country.php) - determine the Country in which the specified website is Hosted

Web stats

Statcounter (http://www.statcounter.com/) - famous free web tracker
HiStats (http://www.histats.com/) - Free, real time updated web stats service
Addfreestats (http://www.addfreestats.com/) - provide free website statistics

Miscellaneous

FEED Validator (http://www.feedvalidator.org/) - for Atom and RSS
W3C Markup Validation Service (http://validator.w3.org/) - check for conformance to W3C Recommendations and other standards
Kontera Ads Preview (http://www.webconfs.com/kontera-preview-tool.php) - preview Kontera Ads on your website
Online spell checker (http://www.markhorrell.com/tools/spellcheck.asp) - simple online spell checking tools
Browser Screen Resolution Checker (http://www.markhorrell.com/tools/browser.shtml) - shows what your site looks like with different screen resolutions
Your Browser Details (http://www.iwebtool.com/browser_details) - View your IP address and your browser details
Anonymous Emailer (http://www.iwebtool.com/anonymous_emailer) - Send e-mails to users anonymously
md5 Encrypt (http://www.iwebtool.com/md5) - Encrypt text to MD5
Online Calculator (http://www.iwebtool.com/online_calculator) - A simple online calculator
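As an aside, the "md5 Encrypt" tool above computes an MD5 digest; MD5 is a one-way hash, not encryption, so the result cannot be decrypted. The same output can be produced with Python's standard library:

```python
import hashlib

def md5_hex(text):
    """Return the hex MD5 digest of a string, as such online tools do."""
    return hashlib.md5(text.encode("utf-8")).hexdigest()
```

For example, the empty string always hashes to d41d8cd98f00b204e9800998ecf8427e, which is a handy sanity check for any MD5 tool.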

Google

1. Reporting Spam to Google - http://www.google.com/contact/spamreport.html
2. Use Google to search your website - http://www.google.com/services/free.html
3. Submit your website to Google - http://www.google.com/addurl.html
4. Monitor Keyword Phrases - http://google.com/webalerts (Neat to check out, though it does not help that much)
5. Google's Guidelines for Webmasters - http://www.google.com/webmasters/guidelines.html (A must-read for new people)
6. Facts for Webmasters - http://www.google.com/webmasters/facts.html
7. Having Trouble? Contact Google Directly - http://www.google.com/ads/offices.html
Website Design & Tools

1. Free Forms for your website TFMail - http://nms-cgi.sourceforge.net/
2. Validate Your HTML - http://validator.w3.org/
3. HTTP Error Code Meanings - http://www.searchengineworld.com/val...errorcodes.htm
4. Keyword Tracking - http://www.digitalpoint.com/tools/keywords/
5. Link Checker - http://dev.w3.org/cvsweb/~checkout~/...0charset=utf-8
6. Search Engine Relationship Chart - http://www.bruceclay.com/searchengin...nshipchart.htm
Bruce Clay does an excellent job of keeping this updated.
7. Link Popularity Checker (Uptime Bot) - http://www.uptimebot.com/
8. Character Counting - http://a1portal.com/freetools/charcount.htm (This is great when optimizing your title or meta tags)
9. Character Encoding - http://www.itnews.org.uk/w_qrefs/w_i...p_charsets.cfm (Ever wonder what those iso-8859-4 or utf-8 were or how to use them?)
10. Converting Hex to Dec or Vice Versa - http://www.hypersolutions.org/pages/hex.html#DectoHex
11. Ascii-Dec-Hex Conversion Code Chart - http://www.sonofsofaman.com/misc/ascii/default.asp
12. Ascii-HTML View Conversion Chart - http://a1portal.com/freetools/asciicodes.htm (This is an excellent resource when placing ascii code on your website. Remember to use the correct character encoding)
13. Ascii Chart in .GIF Format - http://www.jimprice.com/ascii-0-127.gif
14. Customer Focus Tool - http://www.futurenowinc.com/wewe.htm (Tells you whether your website is focused on your customers or not)
15. Dead Link Checker - http://www.dead-links.com/ (Doesn't crawl links within Frames or JavaScript)
16. Adsense Simulator - http://www.digitalpoint.com/tools/adsense-sandbox/ (This will give you an idea of what ads will be displayed on your website before you place them)
17. Google Page Rank Calculator - http://www.webworkshop.net/pagerank...lator.php3?pgs= (This is an advanced tool for finding out what you need to get your PR to the next level.)
18. Page Rank Finder - http://www.seo-guy.com/seo-tools/google-pr.php (This is a great tool to find quality websites with the PR that you are looking for to exchange websites with. This tool only looks at the home page not the link pages. This tool looks at 10 pages or 100 results)
19. Future Google PR - http://www.searchengineforums.com/ap...e/type:rphans/ - This is an article that tells you which datacenter your Google PR is updated on first.
20. Keyword Analysis Tool - http://www.mcdar.net/ - This tool is a must. It's quick and easy to use
21. Keyword Density Analyzer - http://www.webjectives.com/keyword.htm
22. Keyword Difficulty Checker - http://www.searchguild.com/cgi-bin/difficulty.pl (You will need a Google API for this one)
23. Free Google API - http://www.google.com/api
24. Rocket Rank - http://www.rocketrank.com/ - This will only check the top 20 of the following SE's:
(All The Web DMOZ AltaVista Overture Excite Web Crawler HotBot Lycos What U Seek Yahoo)
Keyword Suggestion Tools:
25. WordTracker & Overture Suggestions http://www.digitalpoint.com/tools/suggestion/ - This is the best one of the three
26. Adwords Suggestion - https://adwords.google.com/select/m...=KeywordSandbox
27. Overture Suggestion - http://inventory.overture.com/d/sea...ory/suggestion/
28. Link Analyzer - http://www.scribbling.net/analyze-web-page-links - Analyze the ratio of internal links vs. external links. This is a good tool when determining page rank leakage.
29. Link Appeal - http://www.webmaster-toolkit.com/link-appeal.shtml (Want to know whether or not you actually want your link on that page?)
30. Link City - http://showcase.netins.net/web/phdss/linkcity/ (This place has EVERY tool under the sun for everything you could ever possibly want)
31. Link Reputation - http://198.68.180.60/cgi-bin/link-reputation-tool.cgi (Reveals backlinks pointing to the target URL along with a link survey for each backlink.)
32. Google PR Tools - http://www.thinkbling.com/tools.php (This guy has tons of fantastic tools. He is not as popular as some of the rest but the tools are great)
33. Protect Your e-mail address - http://www.fingerlakesbmw.org/main/flobfuscate.php (Obfuscates your e-mail so spambots don't pick it up from the Internet)
34. Digital Points Ad Network - http://www.digitalpoint.com/tools/ad-network/?s=2197 - After using all of the tools on this page and more, this has helped the rankings faster than anything else.
35. Sandbox Detection Tool - http://www.socengine.com/seo/tools/sandbox-tool.php - Is your website being sandboxed?
36. Spider Simulation - http://www.submitexpress.com/analyzer/ - See what the spider sees on your website
37. SEO-Toys - http://seo-toys.com/ - These are some things that I had in my favorites. Some of them are okay.
38. Multiple SEO Tools - http://www.free-seo-tools.com/ - This website has a variety of misc. tools on it that you can use to better your search engine rankings.
39. Bot Spotter - http://sourceforge.net/projects/botspotter - This is a phenomenal script that will track what bots hit your website at what times. (Runs on PHP enabled websites)
40. Net Mechanic - http://www.netmechanic.com/toolbox/power_user.htm - This will break your website down and tell you any errors that you may be unaware of.
41. Statcounter - http://www.statcounter.com/ - This will track your clients throughout the dynamically created pages of your website. This is a free service.
42. Dr. HTML - http://www.fixingyourwebsite.com/drhtml.html - This will test your website for any errors that you may be unaware of and tell you how to fix them.
43. Page Rank Calculation - http://www.sitepronews.com/pagerank.html

Newsletters & Articles

1. Site Pro News - http://www.sitepronews.com/
2. In Stat - http://www.instat.com/ (This has some decent insight)
3. Page Rank Explained - http://www.webworkshop.net/pagerank....olbar_pagerank
4. Search Engine Ratings and Reviews - http://searchenginewatch.com/reports/
5. Database of Robots - http://www.robotstxt.org/wc/active/html/index.html
(Ever wondered anything about the spiders that are out there?)
ISAPI Rewrites

1. URL Replacer - (Free) - http://www.motobit.com/help/url-repl...od-rewrite.asp
2. Mod Rewrite2 - ($39.90US) - http://www.iismods.com/url-rewrite/index.htm
3. URL Rewrite - (23.00EUR) - http://www.smalig.com/url_rewrite-en.htm

Link Exchanging

1. Links Manager ($20.00US/mo) - http://linksmanager.com/cgi-bin/cook/control_panel.cgi (This is great for the beginner; however, you will find you need to adjust your pages manually in order to spread page rank through them, otherwise you end up with 20 pages with no PR and 1 page with PR.)
2. Page Rank Finder - http://www.seo-guy.com/seo-tools/google-pr.php
3. Link Appeal - http://www.webmaster-toolkit.com/link-appeal.shtml

Search Engine Submissions

1. Submit Express - http://www.submitexpress.com/newsletters/dec_15_00.html (A lot of people use this service; I don't.)
2. Alexa - http://pages.alexa.com/help/webmaste...tml#crawl_site
3. AOL - http://search.aol.com/aolcom/add.jsp
4. DMOZ Dummies Guide - http://www.dummies-guide-to-dmoz.or..._not_google.htm
5. DMOZ Instructions - http://dmoz.org/add.html
6. DMOZ Resource Forum - http://resource-zone.com/forum/showthread.php?t=396 (This is where to go when your website doesn't show up in DMOZ after you have submitted it. READ THEIR RULES FOR ASKING.)
7. ExactSeek - http://www.exactseek.com/freemember.html
8. Google - http://www.google.com/addurl.html
9. Yahoo - http://submit.search.yahoo.com/free/request (You must have an account)
10. Yahoo Directory Help - http://docs.yahoo.com/info/suggest/appropriate.html
11. Yahoo Express Submit TOS - https://ecom.yahoo.com/dir/express/terms
12. Yahoo Submit Help - http://help.yahoo.com/help/us/dir/su...uggest-01.html
13. MSN - http://beta.search.msn.com/docs/submit.aspx?

ALink Reciprocal Link Checker (http://www.info-pack.com/alink/)
AMeta Meta Tag Editor (http://www.info-pack.com/ameta/)
XML Sitemap Maker (http://www.xmlsitemapmaker.com/)
RSS Feed Maker (http://www.rssfeedmaker.biz/)
Webpage Size Checker (http://www.info-pack.com/pagesize/)


Once in a while we get asked whether a site's visibility in Google's search results can be negatively affected if it's unavailable when Googlebot tries to crawl it. Sometimes downtime is unavoidable: a webmaster might decide to take a site down for ongoing site maintenance, or for legal or cultural reasons. Outages that are not clearly marked as such can harm a site's reputation. While we cannot guarantee any crawling, indexing, or ranking, there are ways to handle planned website downtime that will generally not hurt your site's visibility in the search results.

For example, instead of returning an HTTP result code 404 (Not Found) or showing an error page with the status code 200 (OK) when a page is requested, it’s better to return a 503 HTTP result code (Service Unavailable) which tells search engine crawlers that the downtime is temporary. Moreover, it allows webmasters to provide visitors and bots with an estimated time when the site will be up and running again. If known, the length of the downtime in seconds or the estimated date and time when the downtime will be complete can be specified in an optional Retry-After header, which Googlebot may use to determine when to recrawl the URL.

Returning a 503 HTTP result code can be a great solution for a number of other situations. We encounter a lot of problems with sites that return 200 (OK) result codes for server errors, downtime, bandwidth overruns, or temporary placeholder pages ("Under Construction"). The 503 HTTP result code is the webmaster's solution of choice for all of these situations. For planned server downtime such as hardware maintenance, it's a good idea to have a separate server available to actually return the 503 HTTP result code. It is important, however, not to treat 503 as a permanent solution: long-lasting 503s can eventually be seen as a sign that the server is now permanently unavailable, and can result in us removing URLs from Google's index.

If you set up a 503 (Service Unavailable) response, the header information might look like this when using PHP:

header('HTTP/1.1 503 Service Temporarily Unavailable');
header('Retry-After: Sat, 08 Oct 2011 18:27:00 GMT');
Similar to how you can make 404 pages more useful to users, it’s also a good idea to provide a customized 503 message explaining the situation to users and letting them know when the site will be available again. For further information regarding HTTP result codes, please see RFC 2616.
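Since the Retry-After value can be either a number of seconds or an HTTP-date, a client that honors it has to handle both forms. Here is a minimal sketch of that client-side parsing in Python (the function name and its defaults are ours, not taken from any particular crawler):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def retry_after_seconds(value, now=None):
    """Return the suggested delay in seconds for a Retry-After value,
    which may be either delta-seconds (e.g. "120") or an HTTP-date."""
    now = now or datetime.now(timezone.utc)
    if value.isdigit():
        return int(value)                     # delta-seconds form
    when = parsedate_to_datetime(value)       # HTTP-date form
    return max(0, int((when - now).total_seconds()))
```

A date already in the past yields a delay of zero, so the client can simply retry immediately in that case.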



10 Status Code Definitions

Each Status-Code is described below, including a description of which method(s) it can follow and any metainformation required in the response.

10.1 Informational 1xx

This class of status code indicates a provisional response, consisting only of the Status-Line and optional headers, and is terminated by an empty line. There are no required headers for this class of status code. Since HTTP/1.0 did not define any 1xx status codes, servers MUST NOT send a 1xx response to an HTTP/1.0 client except under experimental conditions.
A client MUST be prepared to accept one or more 1xx status responses prior to a regular response, even if the client does not expect a 100 (Continue) status message. Unexpected 1xx status responses MAY be ignored by a user agent.
Proxies MUST forward 1xx responses, unless the connection between the proxy and its client has been closed, or unless the proxy itself requested the generation of the 1xx response. (For example, if a proxy adds a "Expect: 100-continue" field when it forwards a request, then it need not forward the corresponding 100 (Continue) response(s).)

10.1.1 100 Continue

The client SHOULD continue with its request. This interim response is used to inform the client that the initial part of the request has been received and has not yet been rejected by the server. The client SHOULD continue by sending the remainder of the request or, if the request has already been completed, ignore this response. The server MUST send a final response after the request has been completed. See section 8.2.3 for detailed discussion of the use and handling of this status code.

10.1.2 101 Switching Protocols

The server understands and is willing to comply with the client's request, via the Upgrade message header field (section 14.42), for a change in the application protocol being used on this connection. The server will switch protocols to those defined by the response's Upgrade header field immediately after the empty line which terminates the 101 response.
The protocol SHOULD be switched only when it is advantageous to do so. For example, switching to a newer version of HTTP is advantageous over older versions, and switching to a real-time, synchronous protocol might be advantageous when delivering resources that use such features.

10.2 Successful 2xx

This class of status code indicates that the client's request was successfully received, understood, and accepted.

10.2.1 200 OK

The request has succeeded. The information returned with the response is dependent on the method used in the request, for example:
GET an entity corresponding to the requested resource is sent in the response;
HEAD the entity-header fields corresponding to the requested resource are sent in the response without any message-body;
POST an entity describing or containing the result of the action;
TRACE an entity containing the request message as received by the end server.

10.2.2 201 Created

The request has been fulfilled and resulted in a new resource being created. The newly created resource can be referenced by the URI(s) returned in the entity of the response, with the most specific URI for the resource given by a Location header field. The response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. The origin server MUST create the resource before returning the 201 status code. If the action cannot be carried out immediately, the server SHOULD respond with 202 (Accepted) response instead.
A 201 response MAY contain an ETag response header field indicating the current value of the entity tag for the requested variant just created, see section 14.19.
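As a rough illustration of the headers described above, a server creating a resource might assemble its 201 response like this (a hypothetical helper; the URI and ETag values in the usage are examples only):

```python
def created_response(new_uri, etag=None):
    """Status and headers for a 201 (Created) response: Location gives
    the most specific URI for the new resource, and an optional ETag
    carries the entity tag of the variant just created."""
    headers = {"Location": new_uri}
    if etag is not None:
        headers["ETag"] = etag
    return 201, headers
```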

10.2.3 202 Accepted

The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility for re-sending a status code from an asynchronous operation such as this.
The 202 response is intentionally non-committal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.

10.2.4 203 Non-Authoritative Information

The returned metainformation in the entity-header is not the definitive set as available from the origin server, but is gathered from a local or a third-party copy. The set presented MAY be a subset or superset of the original version. For example, including local annotation information about the resource might result in a superset of the metainformation known by the origin server. Use of this response code is not required and is only appropriate when the response would otherwise be 200 (OK).

10.2.5 204 No Content

The server has fulfilled the request but does not need to return an entity-body, and might want to return updated metainformation. The response MAY include new or updated metainformation in the form of entity-headers, which if present SHOULD be associated with the requested variant.
If the client is a user agent, it SHOULD NOT change its document view from that which caused the request to be sent. This response is primarily intended to allow input for actions to take place without causing a change to the user agent's active document view, although any new or updated metainformation SHOULD be applied to the document currently in the user agent's active view.
The 204 response MUST NOT include a message-body, and thus is always terminated by the first empty line after the header fields.

10.2.6 205 Reset Content

The server has fulfilled the request and the user agent SHOULD reset the document view which caused the request to be sent. This response is primarily intended to allow input for actions to take place via user input, followed by a clearing of the form in which the input is given so that the user can easily initiate another input action. The response MUST NOT include an entity.

10.2.7 206 Partial Content

The server has fulfilled the partial GET request for the resource. The request MUST have included a Range header field (section 14.35) indicating the desired range, and MAY have included an If-Range header field (section 14.27) to make the request conditional.
The response MUST include the following header fields:
- Either a Content-Range header field (section 14.16) indicating
        the range included with this response, or a multipart/byteranges
        Content-Type including Content-Range fields for each part. If a
        Content-Length header field is present in the response, its
        value MUST match the actual number of OCTETs transmitted in the
        message-body.
- Date
- ETag and/or Content-Location, if the header would have been sent
        in a 200 response to the same request
- Expires, Cache-Control, and/or Vary, if the field-value might
        differ from that sent in any previous response for the same
        variant
If the 206 response is the result of an If-Range request that used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. If the response is the result of an If-Range request that used a weak validator, the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers. Otherwise, the response MUST include all of the entity-headers that would have been returned with a 200 (OK) response to the same request.
A cache MUST NOT combine a 206 response with other previously cached content if the ETag or Last-Modified headers do not match exactly, see 13.5.4.
A cache that does not support the Range and Content-Range headers MUST NOT cache 206 (Partial) responses.
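For a single satisfiable byte range, the required Content-Range field and the matching Content-Length can be derived mechanically. A sketch in Python (suffix ranges and multipart/byteranges responses are deliberately left out):

```python
def partial_content_headers(first, last, total_length):
    """Headers for a 206 (Partial Content) response to a single
    satisfiable byte range. Content-Range names the octets actually
    included; Content-Length MUST match their count."""
    last = min(last, total_length - 1)   # clamp to the current extent
    return {
        "Content-Range": "bytes %d-%d/%d" % (first, last, total_length),
        "Content-Length": str(last - first + 1),
    }
```

Note the clamping: a last-byte-pos beyond the end of the resource is simply truncated, so the Content-Length still matches the octets actually transmitted.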

10.3 Redirection 3xx

This class of status code indicates that further action needs to be taken by the user agent in order to fulfill the request. The action required MAY be carried out by the user agent without interaction with the user if and only if the method used in the second request is GET or HEAD. A client SHOULD detect infinite redirection loops, since such loops generate network traffic for each redirection.
Note: previous versions of this specification recommended a
      maximum of five redirections. Content developers should be aware
      that there might be clients that implement such a fixed
      limitation.

10.3.1 300 Multiple Choices

The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent-driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location.
Unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection.
If the server has a preferred choice of representation, it SHOULD include the specific URI for that representation in the Location field; user agents MAY use the Location field value for automatic redirection. This response is cacheable unless indicated otherwise.

10.3.2 301 Moved Permanently

The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.
The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).
If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.
Note: When automatically redirecting a POST request after
      receiving a 301 status code, some existing HTTP/1.0 user agents
      will erroneously change it into a GET request.

10.3.3 302 Found

The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.
The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).
If the 302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.
Note: RFC 1945 and RFC 2068 specify that the client is not allowed
      to change the method on the redirected request.  However, most
      existing user agent implementations treat 302 as if it were a 303
      response, performing a GET on the Location field-value regardless
      of the original request method. The status codes 303 and 307 have
      been added for servers that wish to make unambiguously clear which
      kind of reaction is expected of the client.

10.3.4 303 See Other

The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable.
The different URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).
Note: Many pre-HTTP/1.1 user agents do not understand the 303
      status. When interoperability with such clients is a concern, the
      302 status code may be used instead, since most user agents react
      to a 302 response as described here for 303.

10.3.5 304 Not Modified

If the client has performed a conditional GET request and access is allowed, but the document has not been modified, the server SHOULD respond with this status code. The 304 response MUST NOT contain a message-body, and thus is always terminated by the first empty line after the header fields.
The response MUST include the following header fields:
- Date, unless its omission is required by section 14.18.1
If a clockless origin server obeys these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19), caches will operate correctly.
- ETag and/or Content-Location, if the header would have been sent
        in a 200 response to the same request
- Expires, Cache-Control, and/or Vary, if the field-value might
        differ from that sent in any previous response for the same
        variant
If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers.
If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional.
If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response.
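The server-side decision for a conditional GET with If-None-Match can be sketched as follows (entity-tag comparison is simplified to an exact string match, and weak-validator handling is omitted):

```python
def conditional_get(if_none_match, current_etag):
    """Return 304 with no message-body when the client's cached
    validator still matches the current entity tag, else 200."""
    tags = [t.strip() for t in if_none_match.split(",")] if if_none_match else []
    if "*" in tags or current_etag in tags:
        return 304, None              # no body; the cache updates its entry
    return 200, current_etag          # stand-in for the full entity
```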

10.3.6 305 Use Proxy

The requested resource MUST be accessed through the proxy given by the Location field. The Location field gives the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses MUST only be generated by origin servers.
Note: RFC 2068 was not clear that 305 was intended to redirect a
      single request, and to be generated by origin servers only.  Not
      observing these limitations has significant security consequences.

10.3.7 306 (Unused)

The 306 status code was used in a previous version of the specification, is no longer used, and the code is reserved.

10.3.8 307 Temporary Redirect

The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.
The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s), since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI.
If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.
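The differences between the redirect codes above mostly come down to what a user agent does with the request method on the follow-up request. One way to summarize sections 10.3.2 through 10.3.8 (ignoring the legacy HTTP/1.0 quirks noted in those sections):

```python
def redirected_method(status, method):
    """Method a conforming user agent uses when following a redirect.
    303 always switches to GET; 301, 302 and 307 keep the method, but
    for methods other than GET/HEAD the redirect MUST be confirmed by
    the user before it is followed."""
    if status == 303:
        return "GET"
    if status in (301, 302, 307):
        if method in ("GET", "HEAD"):
            return method
        return method + " (after user confirmation)"
    raise ValueError("status %d is not covered by this sketch" % status)
```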

10.4 Client Error 4xx

The 4xx class of status code is intended for cases in which the client seems to have erred. Except when responding to a HEAD request, the server SHOULD include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents SHOULD display any included entity to the user.
If the client is sending data, a server implementation using TCP SHOULD be careful to ensure that the client acknowledges receipt of the packet(s) containing the response, before the server closes the input connection. If the client continues sending data to the server after the close, the server's TCP stack will send a reset packet to the client, which may erase the client's unacknowledged input buffers before they can be read and interpreted by the HTTP application.

10.4.1 400 Bad Request

The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.

10.4.2 401 Unauthorized

The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource. The client MAY repeat the request with a suitable Authorization header field (section 14.8). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials. If the 401 response contains the same challenge as the prior response, and the user agent has already attempted authentication at least once, then the user SHOULD be presented the entity that was given in the response, since that entity might include relevant diagnostic information. HTTP access authentication is explained in "HTTP Authentication: Basic and Digest Access Authentication" [43].
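For the Basic scheme (defined in RFC 2617, not in this section), the 401 challenge and the retried credentials look like this; the realm and user below are the familiar examples from that RFC:

```python
import base64

def basic_challenge(realm):
    """WWW-Authenticate value sent with a 401 response."""
    return 'Basic realm="%s"' % realm

def basic_credentials(user, password):
    """Authorization value the client MAY retry the request with:
    base64 of "user:password"."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return "Basic " + token
```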

10.4.3 402 Payment Required

This code is reserved for future use.

10.4.4 403 Forbidden

The server understood the request, but is refusing to fulfill it. Authorization will not help and the request SHOULD NOT be repeated. If the request method was not HEAD and the server wishes to make public why the request has not been fulfilled, it SHOULD describe the reason for the refusal in the entity. If the server does not wish to make this information available to the client, the status code 404 (Not Found) can be used instead.

10.4.5 404 Not Found

The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address. This status code is commonly used when the server does not wish to reveal exactly why the request has been refused, or when no other response is applicable.

10.4.6 405 Method Not Allowed

The method specified in the Request-Line is not allowed for the resource identified by the Request-URI. The response MUST include an Allow header containing a list of valid methods for the requested resource.

10.4.7 406 Not Acceptable

The resource identified by the request is only capable of generating response entities which have content characteristics not acceptable according to the accept headers sent in the request.
Unless it was a HEAD request, the response SHOULD include an entity containing a list of available entity characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection.
Note: HTTP/1.1 servers are allowed to return responses which are
      not acceptable according to the accept headers sent in the
      request. In some cases, this may even be preferable to sending a
      406 response. User agents are encouraged to inspect the headers of
      an incoming response to determine if it is acceptable.
If the response could be unacceptable, a user agent SHOULD temporarily stop receipt of more data and query the user for a decision on further actions.

10.4.8 407 Proxy Authentication Required

This code is similar to 401 (Unauthorized), but indicates that the client must first authenticate itself with the proxy. The proxy MUST return a Proxy-Authenticate header field (section 14.33) containing a challenge applicable to the proxy for the requested resource. The client MAY repeat the request with a suitable Proxy-Authorization header field (section 14.34). HTTP access authentication is explained in "HTTP Authentication: Basic and Digest Access Authentication" [43].

10.4.9 408 Request Timeout

The client did not produce a request within the time that the server was prepared to wait. The client MAY repeat the request without modifications at any later time.

10.4.10 409 Conflict

The request could not be completed due to a conflict with the current state of the resource. This code is only allowed in situations where it is expected that the user might be able to resolve the conflict and resubmit the request. The response body SHOULD include enough information for the user to recognize the source of the conflict. Ideally, the response entity would include enough information for the user or user agent to fix the problem; however, that might not be possible and is not required.
Conflicts are most likely to occur in response to a PUT request. For example, if versioning were being used and the entity being PUT included changes to a resource which conflict with those made by an earlier (third-party) request, the server might use the 409 response to indicate that it can't complete the request. In this case, the response entity would likely contain a list of the differences between the two versions in a format defined by the response Content-Type.

10.4.11 410 Gone

The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval. If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 (Not Found) SHOULD be used instead. This response is cacheable unless indicated otherwise.
The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed. Such an event is common for limited-time, promotional services and for resources belonging to individuals no longer working at the server's site. It is not necessary to mark all permanently unavailable resources as "gone" or to keep the mark for any length of time -- that is left to the discretion of the server owner.

10.4.12 411 Length Required

The server refuses to accept the request without a defined Content-Length. The client MAY repeat the request if it adds a valid Content-Length header field containing the length of the message-body in the request message.

10.4.13 412 Precondition Failed

The precondition given in one or more of the request-header fields evaluated to false when it was tested on the server. This response code allows the client to place preconditions on the current resource metainformation (header field data) and thus prevent the requested method from being applied to a resource other than the one intended.

10.4.14 413 Request Entity Too Large

The server is refusing to process a request because the request entity is larger than the server is willing or able to process. The server MAY close the connection to prevent the client from continuing the request.
If the condition is temporary, the server SHOULD include a Retry-After header field to indicate that it is temporary and after what time the client MAY try again.

10.4.15 414 Request-URI Too Long

The server is refusing to service the request because the Request-URI is longer than the server is willing to interpret. This rare condition is only likely to occur when a client has improperly converted a POST request to a GET request with long query information, when the client has descended into a URI "black hole" of redirection (e.g., a redirected URI prefix that points to a suffix of itself), or when the server is under attack by a client attempting to exploit security holes present in some servers using fixed-length buffers for reading or manipulating the Request-URI.

10.4.16 415 Unsupported Media Type

The server is refusing to service the request because the entity of the request is in a format not supported by the requested resource for the requested method.

10.4.17 416 Requested Range Not Satisfiable

A server SHOULD return a response with this status code if a request included a Range request-header field (section 14.35), and none of the range-specifier values in this field overlap the current extent of the selected resource, and the request did not include an If-Range request-header field. (For byte-ranges, this means that the first-byte-pos of all of the byte-range-spec values were greater than the current length of the selected resource.)
When this status code is returned for a byte-range request, the response SHOULD include a Content-Range entity-header field specifying the current length of the selected resource (see section 14.16). This response MUST NOT use the multipart/byteranges content-type.
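The satisfiability test and the special "bytes */length" Content-Range form can be sketched as follows (single byte-range-spec values only; If-Range handling is omitted):

```python
def range_status(first_byte_positions, current_length):
    """416 with "bytes */current_length" when no requested
    first-byte-pos falls inside the resource's current extent,
    otherwise 206 (the Content-Range for a 206 is built elsewhere)."""
    if all(pos >= current_length for pos in first_byte_positions):
        return 416, "bytes */%d" % current_length
    return 206, None
```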

10.4.18 417 Expectation Failed

The expectation given in an Expect request-header field (see section 14.20) could not be met by this server, or, if the server is a proxy, the server has unambiguous evidence that the request could not be met by the next-hop server.

10.5 Server Error 5xx

Response status codes beginning with the digit "5" indicate cases in which the server is aware that it has erred or is incapable of performing the request. Except when responding to a HEAD request, the server SHOULD include an entity containing an explanation of the error situation, and whether it is a temporary or permanent condition. User agents SHOULD display any included entity to the user. These response codes are applicable to any request method.

10.5.1 500 Internal Server Error

The server encountered an unexpected condition which prevented it from fulfilling the request.

10.5.2 501 Not Implemented

The server does not support the functionality required to fulfill the request. This is the appropriate response when the server does not recognize the request method and is not capable of supporting it for any resource.

10.5.3 502 Bad Gateway

The server, while acting as a gateway or proxy, received an invalid response from the upstream server it accessed in attempting to fulfill the request.

10.5.4 503 Service Unavailable

The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. If known, the length of the delay MAY be indicated in a Retry-After header. If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response.
Note: The existence of the 503 status code does not imply that a
      server must use it when becoming overloaded. Some servers may wish
      to simply refuse the connection.

10.5.5 504 Gateway Timeout

The server, while acting as a gateway or proxy, did not receive a timely response from the upstream server specified by the URI (e.g. HTTP, FTP, LDAP) or some other auxiliary server (e.g. DNS) it needed to access in attempting to complete the request.
Note: Note to implementors: some deployed proxies are known to
      return 400 or 500 when DNS lookups time out.

10.5.6 505 HTTP Version Not Supported

The server does not support, or refuses to support, the HTTP protocol version that was used in the request message. The server is indicating that it is unable or unwilling to complete the request using the same major version as the client, as described in section 3.1, other than with this error message. The response SHOULD contain an entity describing why that version is not supported and what other protocols are supported by that server.