SEO Help


Search Engine Optimization Overview
SEO Market Analysis

Search Engine Optimization (SEO), often discussed together with Search Engine Marketing (SEM), is the process of improving the volume and quality of traffic to a web site from search engines via "natural" ("organic" or "algorithmic") search results for targeted keywords.
Usually, the earlier a site is presented in the search results or the higher it "ranks", the more searchers will visit that site. SEO can also target different kinds of search, including image search, local search, and industry-specific vertical search engines. As a marketing strategy for increasing a site's relevance, SEO considers how search algorithms work and what people search for. SEO efforts may involve a site's coding, presentation, and structure, as well as fixing problems that could prevent search engine indexing programs from fully spidering a site. Other, more noticeable efforts may include adding unique content to a site, ensuring that content is easily indexed by search engine robots, and making the site more appealing to users.

The initialism "SEO" can also refer to "search engine optimizers", a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees who perform SEO services in-house. Search engine optimizers may offer SEO as a stand-alone service or as a part of a broader marketing campaign. Because effective SEO may require changes to the HTML source code of a site, SEO tactics may be incorporated into web site development and design. The term "search engine friendly" may be used to describe web site designs, menus, content management systems, URLs, and shopping carts that are easy to optimize.


The History of Search Engine Optimization

Webmasters and content providers began optimizing sites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all a webmaster needed to do was submit a page, or URL, to the various engines which would send a spider to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed. Site owners started to recognize the value of having their sites highly ranked and visible in search engine results. The earliest known use of the phrase "search engine optimization" was a spam message posted on Usenet on July 26, 1997.
In 1998 Google was born and attracted a loyal following among the growing number of Internet users, who liked its simple design. Off-page factors such as PageRank and hyperlink analysis were considered, as well as on-page factors, to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the search engine, and these methods proved similarly applicable to gaining PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.


Search Engine Optimization As A Marketing Strategy

Eye tracking studies have shown that searchers scan a search results page from top to bottom and left to right (for left to right languages), looking for a relevant result. Placement at or near the top of the rankings therefore increases the number of searchers who will visit a site.
At Blackwood Productions we believe that a successful Internet marketing campaign is essential for any company wishing to sell products into its preferred market. While paid advertising through options such as Google AdWords should be considered, the ever increasing cost and competition have made strong organic listings a necessity.
Search Engine Optimization (SEO) is often considered the more technical part of Web marketing: it helps in the promotion of sites and at the same time it requires some technical knowledge – at least familiarity with basic HTML. SEO is sometimes also called SEO copywriting, because most of the techniques that are used to promote sites in search engines deal with text. Generally, SEO can be defined as the activity of optimizing Web pages or whole sites in order to make them more search engine friendly, thus getting higher positions in search results.
One of the basic truths in SEO is that even if you do everything that is necessary, this does not automatically guarantee you top ratings, but if you neglect the basic rules, this will certainly not go unnoticed. Also, if you set realistic goals – e.g. to get into the top 30 results in Google for a particular keyword, rather than to be number one for 10 keywords in 5 search engines – you will feel happier and more satisfied with your results.
Although SEO helps to increase the traffic to one's site, SEO is not advertising. Of course, you can be included in paid search results for given keywords but basically the idea behind the SEO techniques is to get top placement because your site is relevant to a particular search term, not because you pay. SEO can be a 30-minute job or a permanent activity. Sometimes it is enough to do some generic SEO in order to get high in search engines – for instance, if you are a leader for rare keywords, then you do not have a lot to do in order to get decent placement. But in most cases, if you really want to be at the top, you need to pay special attention to SEO and devote significant amounts of time and effort to it. Even if you plan to do some basic SEO, it is essential that you understand how search engines work and which items are most important in SEO.


1. How Search Engines Work

The first basic truth you need to learn about SEO is that search engines are not humans. While this might be obvious for everybody, the differences between how humans and search engines view web pages aren't. Unlike humans, search engines are text-driven. Although technology advances rapidly, search engines are far from intelligent creatures that can feel the beauty of a cool design or enjoy the sounds and movement in movies. Instead, search engines crawl the Web, looking at particular site items (mainly text) to get an idea what a site is about. This brief explanation is not the most precise because as we will see next, search engines perform several activities in order to deliver search results – crawling, indexing, processing, calculating relevancy, and retrieving.
First, search engines crawl the Web to see what is there. This task is performed by a piece of software called a crawler or a spider (or Googlebot, as is the case with Google). Spiders follow links from one page to another and index everything they find on their way. Given the number of pages on the Web (over 20 billion), it is impossible for a spider to visit a site daily just to see if a new page has appeared or if an existing page has been modified. Sometimes crawlers will not visit your site for a month or two, and during this time your SEO efforts will not be rewarded – there is nothing you can do about it except be patient.
What you can do is check what a crawler sees on your site. As already mentioned, crawlers are not humans and they do not see images, Flash movies, JavaScript, frames, password-protected pages and directories, so if you have tons of these on your site, you'd better run the Spider Simulator below to see whether these goodies are viewable by the spider. If they are not viewable, they will not be spidered, not indexed, not processed, etc. – in a word, they will be non-existent for search engines.
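To get a rough idea of what a crawler extracts, here is a minimal Python sketch (standard library only, not the actual Spider Simulator tool; the URL is just a placeholder). It fetches a page, collects the plain text a spider could index, and gathers the links it could follow, while skipping script and style blocks:

from html.parser import HTMLParser
from urllib.request import urlopen

class SpiderView(HTMLParser):
    """Collects the plain text and the followable links a crawler would see."""
    def __init__(self):
        super().__init__()
        self.text = []
        self.links = []
        self._skip = 0  # depth inside <script>/<style>, which is not indexable text

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.text.append(data.strip())

html = urlopen("http://example.com/").read().decode("utf-8", errors="ignore")
parser = SpiderView()
parser.feed(html)
print(" ".join(parser.text))   # the text a spider can index
print(parser.links)            # the links it can follow

Anything that is not plain text or a plain href (images, Flash, links built by JavaScript) simply never shows up in this output, which is the point the section above makes.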


2. Differences Between the Major Search Engines

Although the basic principle of operation of all search engines is the same, the minor differences between them lead to major changes in result relevancy. Different factors are important for different search engines. There was a time when SEO experts joked that the algorithms of Yahoo! were intentionally made just the opposite of those of Google. While this might have a grain of truth, it is a matter of fact that the major search engines like different things, and if you plan to conquer more than one of them, you need to optimize carefully.
There are many examples of the differences between search engines. For instance, for Yahoo! and MSN, on-page keyword factors are of primary importance, while for Google links are very, very important. Also, for Google sites are like wine – the older, the better – while Yahoo! generally has no expressed preference towards sites and domains with tradition (i.e. older ones). Thus you might need more time for your site to mature enough to be admitted to the top in Google than in Yahoo!.


What is White Hat SEO?

Published by Rob at 8:25 am under SEO news
A basic definition of Search Engine Optimization, or SEO, is that it is the practice of optimizing a web page or a whole web site in a way that makes it more search engine friendly. This is done primarily to achieve a higher ranking in search engine result pages and increase web traffic for the page or site. SEO requires an extensive knowledge of how search engines and the Internet work. It involves a variety of both technical and creative techniques. These techniques usually differ depending on the approved or preferred practices observed by different SEO firms.
It is from these differences in SEO practices that such classifications as “white hat SEO” and “black hat SEO” came about. Traditionally, the term white hat was used to refer to ethical hackers who hack IT systems to keep them secure and protected. In SEO terms, white hat practices are those that follow the rules and policies set by the search engines themselves, while black hat SEO refers to aggressive practices that “manipulate” search engines and disregard human audiences. Most black hat practices violate rules set by search engines. Spamming, which is basically what black hat SEO is about, presents several risks. The biggest risk is that of a website being banned from search engines. Black hat practices are all about “quick fixes” – they can give you a good result one moment and get you completely out of the running the next.
White hat SEO, on the other hand, produces long-term results that can make a website more credible. Examples of white hat practices include backlinking and link building. Writing content targeted at human readers is also an important white hat practice. Most people refer to white hat practices as ethical SEO.
Reliable search engine optimization firms like Blackwood Productions make use only of white hat practices, for a number of reasons. First and foremost, white hat SEO practices produce more long-term results, which has a lot to do with search engines becoming smarter over time. White hat practices, which are the “legal” ones as defined by most search engines, build strong and credible websites that withstand most changes in how search engines index and rank. Moreover, websites built and optimized by means of white hat SEO practices gain solid links over time – links that will continue to reap good results in the years to come. This is perhaps the best advantage of white hat practices: they can promise stability for a website and, consequently, for an online business.

Text-to-Code Ratio and SEO

Published by Rob at 4:32 am under SEO news
Text to code ratio generally refers to how much of your webpage is text. A tool is typically used to extract the visible text and anchor text from the HTML code and work out what percentage of the page it makes up. It is relevant to SEO because some search engines are believed to use this information to gauge the relevance of a particular site. While not all search engines are particular about this detail as far as their indexing algorithms are concerned, having more text on your pages is a good SEO strategy.
Finding out your site’s text to code ratio is a simple matter of using one of those online tools: you enter your URL, hit enter, and see the data you’re looking for.
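If you are curious what such a tool actually measures, here is a rough Python sketch: the ratio is simply the length of the visible text divided by the total size of the HTML source (the exact formula real tools use may differ, and the URL is a placeholder):

import re
from urllib.request import urlopen

def text_to_code_ratio(url):
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    # drop scripts, styles and all remaining tags to approximate the visible text
    text = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)
    text = re.sub(r"(?s)<[^>]+>", " ", text)
    text = " ".join(text.split())
    return len(text) / len(html) * 100

print(round(text_to_code_ratio("http://example.com/"), 1), "% text")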
So maybe it’s high time that you cleaned up your pages and made sure that there is less code in them. There has been a lot of evidence suggesting that this practice can actually positively affect search engine rank.
The thing about having “cleaner” pages is that it makes it easier for search engines to identify whether or not your page is relevant to a particular search. For instance, using more semantic code to reduce the amount of HTML in your page allows the search engine spiders to work out which part of the page they have landed on. It is easier for them to know that they are looking at your header, which helps them judge relevance.
But then again, there are still a lot of people who are more inclined to believe that text-to-code ratio is not necessarily an essential ingredient of a decent search engine rank. There are a lot of sites that rank well in various searches and yet don’t really contain a lot of text. In an interview with WebPro News, Vanessa Fox of Google said that this detail doesn’t really seem to matter much.
“This point I’ve seen crop up so many times, and each and every time I say – it doesn’t matter! One of my first sites was created in Frontpage with absolutely shocking code and it ranks fine, even for searches with 100 million+ results.”
Therefore, for Google at least, text to code ratio isn’t so important.
The middle ground in all this is that, no, it’s not vital to SEO, but cleaning up the code in your pages is still a good thing: it helps your pages load faster, so you make both the search engines and your visitors happy.

 

II. Keywords – the Most Important Item in SEO

Keywords are the most important SEO item for every search engine – actually they are what search strings are matched against. So you see that it is very important that you optimize your site for the right keywords. This seems easy at first but when you get into more detail, it might be a bit confusing to correctly determine the keywords. But with a little research and thinking the problem of selecting the right keywords to optimize for can be solved.

1. Choosing the Right Keywords to Optimize For

It seems that the time when you could easily top the results for a one-word search string is centuries ago. Now, when the Web is so densely populated with sites, it is next to impossible to achieve constant top ratings for a one-word search string. Achieving constant top ratings for two-word or three-word search strings is a more realistic goal. If you examine closely the dynamics of search results for popular one-word keywords, you might notice that it is easy to be in the first ten results one week and to have fallen out of the first 30 the next, because the competition for popular one-word keywords is so fierce and other sites have replaced you.
Of course, you can include one-word strings in your keywords list but if they are not backed up by more expressions, do not dream of high ratings. For instance, if you have a site about dogs, “dog” is a mandatory keyword but if you do not optimize for more words, like “dog owners”, “dog breeds”, “dog food”, or even “canine”, success is unlikely, especially for such a popular keyword. The examples given here are by no means the ultimate truth about how to optimize a dog site but they are good enough to show that you need to think broad when choosing the keywords.
Generally, when you start optimization, the first thing you need to consider is the keywords that describe the content of your site best and that are most likely to be used by users to find you. Ideally, you know your users well and can guess correctly what search strings they are likely to use to search for you. One issue to consider is synonyms. Very often users will use a different word for the same thing. For instance, in the example with the dog site, “canine” is a synonym and it is for sure that there will be users who will use it, so it does not hurt to include it now and then on your pages. But do not rush to optimize for every synonym you can think of – search engines themselves have algorithms that include synonyms in the keyword match, especially for languages like English.
Instead, think of more keywords that are likely to be used to describe your site. Thinking thematically is especially good because search engines tend to rate a page higher if it belongs to a site whose theme fits the keyword string. In this respect it is important that your site is concentrated around a particular theme – i.e. dogs. It might be difficult to think of all the relevant keywords on your own, but that is what tools are for. For instance, the Website Keyword Suggestions Tool below can help you see how search engines determine the theme of your web site and which keywords fit into this theme. You can also try Google's Keyword Tool (https://adwords.google.com/select/KeywordToolExternal) to get more suggestions about which keywords are hot and which are not.
When choosing the keywords to optimize for, you need to consider not only their relevancy to your site but also the expected monthly number of searches for these particular keywords. Very often narrow searches are more valuable because the users that come to your site are those that are really interested in your product. If we go on with the dog example, you might discover that the “adopt a dog” keyphrase brings you more visitors because you have a special section on your site where you give advice on what to look for when adopting a dog. This page is of no interest to current dog owners but only to potential dog owners, who might not be many in number but are your target audience, and the overall effect of attracting this niche can be better than attracting everybody who is interested in dogs in general. So, when you look at the numbers of search hits per month, consider the unique hits that fit into the theme of your site.
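If you want a quick, offline way to spot the phrases your own copy already emphasizes (similar in spirit to what keyword suggestion tools report, though far simpler), you can count the most frequent two- and three-word phrases in your text. A small Python sketch; the sample text is made up:

from collections import Counter
import re

def top_phrases(text, n=2, limit=10):
    # split into lowercase words, then build overlapping n-word phrases
    words = re.findall(r"[a-z]+", text.lower())
    phrases = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return Counter(phrases).most_common(limit)

copy = "Adopt a dog today. Dog adoption saves lives and adopting a dog brings joy."
print(top_phrases(copy, n=2, limit=5))

The phrases that keep turning up are good candidates to check against real search-volume data before you commit to them.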

2. Keyword Density

After you have chosen the keywords that describe your site and are supposedly of interest to your users, the next step is to make your site keyword-rich and to have good keyword density for your target keywords. Keyword density is a common measure of how relevant a page is. Generally, the idea is that the higher the keyword density, the more relevant to the search string a page is. The recommended density is 3-7% for the major 2 or 3 keywords and 1-2% for minor keywords. Try the Keyword Density Checker below to determine the keyword density of your website. Although there are no strict rules, try optimizing for a reasonable number of keywords – 5 or 10 is OK. If you attempt to optimize for a list of 300, you will soon see that it is just not possible to have a good keyword density for more than a few keywords without making the text sound artificial and stuffed with keywords. And what is worse, there are severe penalties (including a ban from the search engine) for keyword stuffing, because this is considered an unethical practice that tries to manipulate search results.
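For reference, keyword density is just the share of the page's words taken up by a keyword or keyphrase. A minimal sketch of the calculation in Python (the page text and keyword are placeholders, and real tools may count phrases slightly differently):

import re

def keyword_density(page_text, keyword):
    words = re.findall(r"[a-z]+", page_text.lower())
    phrase = keyword.lower().split()
    hits = sum(1 for i in range(len(words) - len(phrase) + 1)
               if words[i:i + len(phrase)] == phrase)
    # multiply by the phrase length so a two-word phrase counts two of the page's words
    return hits * len(phrase) / max(len(words), 1) * 100

text = "Adopting a dog is easy. Dog adoption centers help you choose the right dog."
print(round(keyword_density(text, "dog"), 1), "%")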

3. Keywords in Special Places

Keywords matter not only in terms of quantity but also in terms of placement – i.e. if you have keywords in the page title, the headings, and the first paragraphs, this counts for more than having many keywords at the bottom of the page. The reason is that the URL (and especially the domain name), file names and directory names, the page title, and the headings of the separate sections are more important than ordinary text on the page. Therefore, all else being equal, if you have the same keyword density as your competitors but you have keywords in the URL, this will boost your ranking incredibly, especially with Yahoo!.

a. Keywords in URLs and File Names

The domain name and the whole URL of a site tell a lot about it. The presumption is that if your site is about dogs, you will have “dog”, “dogs”, or “puppy” as part of your domain name. For instance, if your site is mainly about adopting dogs, it is much better to name your dog site “dog-adopt.net” than “animal-care.org”, for example, because in the first case you have two major keywords in the URL, while in the second one you have no more than one potential minor keyword.
When hunting for keyword-rich domain names, don't get greedy. While from an SEO point of view it is better to have 5 keywords in the URL, just imagine how long and difficult to memorize such a URL will be. So you need to strike a balance between the keywords in the URL and site usability, which says that more than 3 words in the URL is way too much.
You will probably not be able to come up with tons of good suggestions on your own. Additionally, even if you manage to think of a couple of good domain names, they might already be taken. In such cases tools like the one below can come in very handy.
File names and directory names are also important. Often search engines will give preference to pages that have a keyword in the file name. For instance http://mydomain.com/dog-adopt.html is not as good as http://dog-adopt.net/dog-adopt.html but is certainly better than http://mydomain.com/animal-care.html. The advantage of keywords in file names over keywords in URLs is that they are easier to change, if you decide to move to another niche, for example.

b. Keywords in Page Titles

The page title is another special place because the contents of the <title> tag usually get displayed in search results by most search engines (including Google). While it is not mandatory per the HTML specification to write something in the <title> tag (i.e. you can leave it empty and the title bar of the browser will read “Untitled Document” or similar), for SEO purposes you should not leave the <title> tag empty; instead, write the page title in it.
Unlike URLs, with page titles you can get wordy. If we go on with the dog example, the <title> tag of the home page for the http://dog-adopt.net can include something like this: <title>Adopt a Dog – Save a Life and Bring Joy to Your Home</title>, <title>Everything You Need to Know About Adopting a Dog</title> or even longer.

c. Keywords in Headings

Normally headings separate paragraphs into related subtopics, and from a literary point of view it may be pointless to have a heading after every other paragraph, but from an SEO point of view it is extremely good to have as many headings on a page as possible, especially if they have the keywords in them.
There are no technical length limits for the contents of the <h1>, <h2>, <h3>, ... <hn> tags, but common sense says that overly long headings are bad for page readability. So, as with URLs, you need to be wise with the length of headings. Another issue to consider is how the heading will be displayed. If it is a Heading 1 (<h1>), this generally means a larger font size, and in this case it is advisable to have fewer than 7-8 words in the heading; otherwise it might spread over 2 or 3 lines, which is not good, and if you can avoid it – do it.

III. Links – Another Important SEO Item

1. Why Links Are Important

Probably the word that is associated best with the Web is “links”. That is what hypertext is all about – you link to pages you like and get linked to by pages that like your site. Actually, the Web is woven out of interconnected pages, and spiders follow the links when indexing the Web. If not many sites link to you, it might take ages for search engines to find your site, and even if they find you, it is unlikely that you will have high rankings, because the quality and quantity of links is part of the algorithms search engines use to calculate relevancy.

2. Inbound and Outbound Links

Put in layman's terms, there are two types of links that are important for SEO – inbound and outbound links. Outbound links are links that start from your site and lead to another one, while inbound links, or backlinks, come from an external site to yours; e.g. if a.com links to mydomain.com, the link from a.com is an inbound link for mydomain.com.
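As a trivial illustration of the distinction, here is a short Python sketch that takes a few links found on a page of mydomain.com (the list is made up) and labels each as internal or outbound:

from urllib.parse import urlparse

def classify(links, own_domain):
    for href in links:
        host = urlparse(href).netloc
        # relative links have no host, so they stay on the same site
        kind = "internal" if (not host or host.endswith(own_domain)) else "outbound"
        print(kind, href)

classify(["/about.html",
          "http://mydomain.com/dogs.html",
          "http://a.com/great-dogs.html"], "mydomain.com")

Seen from a.com's side, that last link is one of its outbound links and, at the same time, an inbound link (backlink) for mydomain.com.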
Backlinks are very important because they are supposed to be a measure of the popularity of your site among the Web audience. It is necessary to say that not all backlinks are equal. There are good and bad backlinks. Good backlinks come from reputable places – preferably from sites with a similar theme. These links do boost search engine ranking. Bad backlinks come from suspicious places – like link farms – and are something to be avoided. If you are backlinked from such a place without your knowledge and consent, maybe you should drop the webmaster a line, asking him or her to remove the backlink.
If you are not heavily backlinked, don't worry - buying links is an established practice and if you are serious about getting to the top, you may need to consider it. But before doing this, you should consider some free alternatives. For instance, some of the good places where you can get quality backlinks are Web directories like http://dmoz.org or http://dir.yahoo.com.
First, look for suitable sites to link to you using the Backlinks Builder below. After you identify potential backlinks, it's time to contact the webmaster of the site and start negotiating terms. Sometimes you can agree on a barter deal – i.e. a link exchange: they will put N links to your site on their site and you will put N links to their site on yours – but keep in mind that this is a bad, risky deal and you should always try to avoid it.

Internal links (i.e. links from one page to another page on the same site) are also important, though not as important as backlinks. In this connection it is necessary to say that using images for links might be prettier, but it is an SEO killer. Instead of having buttons for links, use simple text links. Since search engines spider the text on a page, they can't see all the designer miracles, like gradient buttons or Flash animations, so when possible either avoid using them or provide a meaningful textual description in the alt attribute, as described next.

3. Anchor text

Anchor text is the most important item in a backlink. While it does matter where a link comes from (i.e. a reputable place or a link farm), what matters more is the actual text the link is attached to. Put simply, anchor text is the word(s) you click on to open the hyperlink – e.g. if a page links to google.com with the text “the best search engine”, then “the best search engine” is the anchor text for the hyperlink to google.com. You see that you might have a backlink from a valuable site, but if the anchor text is something like “an example of a complete failure”, you will hardly be happy with it.
When you check your backlinks, always check what their anchor text is and whether there is a keyword in it. It is a great SEO boost to have a lot of backlinks from quality sites whose anchor text includes your keywords. You can check the anchor text of inbound backlinks with the Backlink Anchor Text Analyzer tool below. Besides the anchor text itself, the text around it is also important.
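If you want to check anchor text by hand, the sketch below (Python standard library only; the URL and keyphrase are placeholders) pulls every link and its anchor text out of a backlinking page and flags the ones that contain your target keyphrase:

from html.parser import HTMLParser
from urllib.request import urlopen

class AnchorCollector(HTMLParser):
    """Pairs each href with the anchor text found between <a> and </a>."""
    def __init__(self):
        super().__init__()
        self.pairs, self._href, self._text = [], None, []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href, self._text = dict(attrs).get("href"), []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.pairs.append((self._href, " ".join(self._text).strip()))
            self._href = None

page = urlopen("http://a.com/links.html").read().decode("utf-8", errors="ignore")
collector = AnchorCollector()
collector.feed(page)
for href, anchor in collector.pairs:
    if "dog adoption" in anchor.lower():      # our target keyphrase
        print("keyword-rich anchor:", anchor, "->", href)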

4. Link Practices That Are To Be Avoided

Similar to keyword stuffing, purchasing links in bulk is a practice to be avoided. It looks suspicious if you bartered 1,000 links with another site within a day or two. What is more, search engines keep track of link farms (sites that sell links in bulk), and since bought links are a way to manipulate search results, this practice gets punished by search engines. So avoid dealing with link farms, because it can cause more harm than good. Outbound links from your site to known Web spammers or “bad guys” are also to be avoided.
As mentioned, link exchange is not a clean deal. Even if it boosts your ranking, it can have many negative aspects in the long run. First, you do not know if the other party will keep their promise – i.e. they might remove some of the links to you. Second, they might change the context the link appears in. Third, it is really suspicious if you seem to be “married” to another site and 50% or more of your inbound and outbound links point in this one direction.
Where links are concerned, one aspect to keep in mind is the ratio between inbound and outbound links. Generally speaking, if your outbound links outnumber your inbound links ten to one, this is bad, but it also varies on a case-by-case basis. If you have a site that links to news sources or carries RSS feeds, then having many outbound links is the inevitable price of fresh content.




 

IV. Metatags

A couple of years ago <meta> tags were the primary tool for search engine optimization and there was a direct correlation between what you wrote there and your position in search results. However, algorithms have become better and today the importance of metadata is decreasing day by day, especially with Google. Still, some search engines show the meta description (under the clickable link in search results), so users can read what you have written, and if they think it is relevant, they might go to your site. Also, some of the specialized search engines still use the metatags when ranking your site.
The meta Description tag is one more way for you to write a description of your site, thus pointing search engines to the themes and topics your Web site is relevant to. It does not hurt to include at least a brief description, so don't skip it. For instance, for the dog adoption site, the meta Description tag could be something like this: <Meta Name=“Description” Content=“Adopting a dog saves a life and brings joy to your house. All you need to know when you consider adopting a dog.”>
A potential use of the meta Keywords tag is to include a list of keywords that you think are relevant to your pages. The major search engines will not take this into account, but still it is a chance for you to emphasize your target keywords. You may consider including alternative spellings (or even common misspellings) of your keywords in the meta Keywords tag. For instance, if I were to write the meta Keywords tag for the dog adoption site, I would do it like this: <Meta Name=“Keywords” Content=“adopt, adoption, dog, dogs, puppy, canine, save a life, homeless animals”>. It is a small boost towards top search engine ranking, so why miss the chance?
The meta Robots tag deserves more attention. In this tag you specify the pages that you do NOT want crawled and indexed. It happens that your site has content that you need to keep there but do not want indexed. Listing these pages in the meta Robots tag is one way to exclude them from being indexed (the other way is by using a robots.txt file, and generally that is the better way to do it). For instance, a page that should stay out of the index could include <Meta Name=“Robots” Content=“noindex,nofollow”> in its <head> section.

 

V. Content Is King

If you are new to SEO, it might be a surprise for you that text is one of the driving forces behind higher rankings. But it is a fact. Search engines (and your readers) love fresh content, and providing them with regularly updated, relevant content is a recipe for success. Generally, when a site is frequently updated, this increases the probability that the spider will revisit the site sooner. You can't take it for granted that if you update your site daily the spider will visit it even once a week, but if you do not update your content regularly, this will certainly drop you from the top of the search results.
For company sites that are focused not on writing but on manufacturing, constantly adding text can be a problem, because generally company sites are not reading rooms or online magazines that update their content daily, weekly or monthly. But even for company sites there are reasonable solutions. No matter what your business is, one thing is for sure – it is always relevant to include a news section on your site. It can be company news or RSS feeds, but it will keep the ball rolling.

1. Topical Themes or How to Frequently Add Content to Your Site

If you are doing the SEO for an online magazine, you can consider yourself lucky – fresh content is coming in all the time and you just need to occasionally arrange a heading or two, or a couple of paragraphs, to make the site SEO-friendly. But even if you are doing SEO for an ordinary company site, it is not all that bad – there are ways to constantly get fresh content that fits the topic of the site.
One of the intricacies of optimizing a company site is that it has to be serious. Also, if your content smells like advertising and has no practical value for your visitors, it is not that valuable. For instance, if you are a trade company, you can have promotional texts about your products, but keep in mind that these texts must be informational, not just sales hype. And if you have a lot of products to sell, frequently get new products, or make periodic promotions of particular products and product groups, you can post all this to your site and you will have fresh, topical content.
Also, depending on what your business is about, you can include different kinds of self-updating information like lists of hot new products, featured products, discounted items, even online calculators or order trackers. Unlike promotional pages, this might neither bring you many new visitors nor improve your ratings, but it is more than nothing.
One more potential traffic trigger for company sites is a news section. Here you can include news about past and coming events, post reports about various activities, announce new undertakings, etc. Some companies even go further – their CEO keeps a blog, where he or she writes in a more informal style about what is going on in the company, in the industry as a whole, or in the world in general. These blogs do attract readers, especially if the information is candid rather than just the official story.
An alternative way to get fresh, free content is RSS feeds. RSS feeds are gaining more and more popularity, and with a little bit of searching you can get free syndicated content for almost any topic you can think of.

2. Bold and Italic Text

When you have lots of text, the next question is how to make the important items stand out from the crowd – for both humans and search engines. While search engines (and their spiders – the programs that crawl the Web and index pages) cannot read text the way humans do, they do have ways of getting the meaning of a piece of text. Headings are one possibility; bold and italic are another way to emphasize a word or a couple of words that are important. Search engines read the <b> and <i> text and get the idea that what is in bold and/or italic is more important than the rest of the text. But do not use bold and italic too much – this will spoil the effect rather than make the whole page a search engine favorite.

3. Duplicate Content

When you get new content, there is one important issue – is this content original? Because if it is not, i.e. it is stolen from another site, this will get you into trouble. But even if it is not illegal – i.e. you obtained it for free from an article feed – keep in mind that you might not be the only one on the Web who has this particular material. If you have the rights to do it, you can change the text a little, so it is not an exact copy of another page and cannot be labeled “duplicate content” by search engines. If you don't manage to escape the duplicate content filter that search engines have introduced in their attempts to filter out stolen, scraped, or simply copied content, your pages could be removed from search results!
Duplicate content became an issue when tricky webmasters started making multiple copies of the same page (under a different name) in order to fool search engines that they have more content than they actually do. As a result of this malpractice, search engines responded with a duplicate content filter that removes suspicious pages. Unfortunately, this filter sometimes removes quite legitimate pages, like product descriptions given from a manufacturer to all its resellers, which must be kept exactly the same.
You see, duplicate content can be a serious problem, but it is not an obstacle that cannot be overcome. First, you need to periodically check the Web for pages that are similar to yours; you can use http://copyscape.com. If you identify pages that are similar to yours (and it is not you who have illegitimately copied them), you could notify the webmaster of the respective site(s) and ask for them to be removed. You could also change the text on your site a little, hoping that this way you will avoid the duplicate content penalty. Even with product descriptions, you can add commentary or opinion on the same page, and this could be a way out.
Try the Similar Page Checker to check the similarity between two URLs.
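The general idea behind such a checker can be reproduced with a few lines of Python: strip the markup from both pages and compare the short word sequences (shingles) they share. The exact algorithms real tools and search engines use are not public, and the URLs below are placeholders:

import re
from urllib.request import urlopen

def shingles(url, n=3):
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    text = re.sub(r"(?s)<[^>]+>", " ", html)          # crude tag stripping
    words = re.findall(r"[a-z]+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(url_a, url_b):
    a, b = shingles(url_a), shingles(url_b)
    return len(a & b) / max(len(a | b), 1) * 100       # Jaccard similarity in percent

print(round(similarity("http://mydomain.com/page1.html",
                       "http://mydomain.com/page2.html"), 1), "% similar")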

VI. Visual Extras and SEO

As already mentioned, search engines have no means of directly indexing extras like images, sounds, Flash movies and JavaScript. Instead, they rely on you to provide meaningful textual descriptions, and based on these they can index the files. In a sense, the situation is similar to that with text 10 or so years ago – you provide a description in the metatag and the search engine uses this description to index and process your page. If technology advances further, one day it might be possible for search engines to index images, movies, etc., but for the time being this is just a dream.

1. Images

Images are an essential part of any Web page, and from a designer's point of view they are not an extra but a mandatory item for every site. However, here designers and search engines are at opposite poles, because for search engines every piece of information that is buried in an image is lost. When working with designers, it sometimes takes a while to explain to them that having textual links (with proper anchor text) instead of shiny images is not a whim and that clear text navigation really is mandatory. Yes, it can be hard to find the right balance between artistic performance and SEO-friendliness, but since even the finest site is lost in cyberspace if it cannot be found by search engines, some compromise in its visual appearance cannot be avoided.
With all that said, the idea is not to skip images altogether. Nowadays that is hardly possible, because the result would be a very plain site. Rather, the idea is that images should be used for illustration and decoration, not for navigation or, even worse, for displaying text (in a fancy font, for example). Most importantly, in the alt attribute of the <img> tag, always provide a meaningful textual description of the image. The HTML specification does not require this, but search engines do appreciate it. Also, it does not hurt to give meaningful names to the image files themselves, rather than naming them image1.jpg, image2.jpg, imageN.jpg. For instance, in the next example the image file has an informative name and the alt attribute provides enough additional information: <img src=“one_month_Jim.jpg” alt=“A picture of Jim when he was a one-month-old puppy”>. But don't go to extremes like writing 20-word alt attributes for 1-pixel images, because this also looks suspicious and starts to smell like keyword stuffing.

2. Animation and Movies

The situation with animation and movies is similar to that with images – they are valuable from a designer's point of view but are not loved by search engines. For instance, it is still pretty common to have an impressive Flash introduction on the home page. You just cannot imagine what a disadvantage with search engines this is – it is a number one rankings killer! And it gets even worse if you use Flash to tell a story that could be written in plain text and hence crawled and indexed by search engines. One workaround is to provide search engines with an HTML version of the Flash movie, but in this case make sure that you have excluded the original Flash movie from indexing (this is done in the robots.txt file, but the explanation of this file is not a beginner's topic and that is why it is excluded from this tutorial); otherwise you can be penalized for duplicate content.
There are rumors that Google is building new search technology that will allow searching inside animation and movies, and that the .swf format will contain new metadata that can be used by search engines. Until then, you'd better either refrain from using (too much) Flash or at least provide a textual description of the movie on the page that hosts it.

3. Frames

It is good news that frames are slowly but surely disappearing from the Web. 5 or 10 years ago they were an absolute hit with designers, but never with search engines. Search engines have difficulties indexing framed pages because the URL of the page is the same no matter which of the separate frames is open. For search engines this was a problem, because there were actually 3 or 4 pages behind only one URL, while for a search engine 1 URL is 1 page. Of course, search engines can follow the links to the pages in the frameset and index them, but this is a hurdle for them.
If you still insist on using frames, make sure that you provide a meaningful description of the site in the <noframes> tag. The following example is not for beginners but even if you do not understand everything in it, just remember that the <noframes> tag is the place to provide an alternative version (or at least a short description) of your site for search engines and users whose browsers do not support frames. If you decide to use the <noframes> tag, maybe you'd like to read more about it before you start using it.
Example: <noframes> <p> This site is best viewed in a browser that supports frames. </p><p> Welcome to our site for prospective dog adopters! Adopting a homeless dog is a most noble deed that will help save the life of the poor creature. </p></noframes>

4. JavaScript

This is another hot potato. Everybody knows that pure HTML is powerless to create complex sites with a lot of functionality (HTML was never intended to be a programming language for building Web applications, so nobody expects you to use HTML to write to a database or even to store session information) as required by today's Web users, and that is why other languages (like JavaScript or PHP) come to enhance HTML. For now, search engines simply ignore the JavaScript they encounter on a page. As a result, first, if you have links that are inside JavaScript code, chances are that they will not be spidered. Second, if the JavaScript is in the HTML file itself (rather than in an external .js file that is invoked when necessary), this clutters the HTML file and spiders might just skip it and move on to the next site. Just for your information, there is a <noscript> tag that allows you to provide an alternative to running the script in the browser, but because most of its applications are pretty complicated, it is hardly suitable to explain it here.

 

VII. Static Versus Dynamic URLs

Based on the previous section, you might have gotten the impression that the algorithms of search engines frustrate every designer's effort to make a site gorgeous. Well, it has been explained why search engines do not like images, movies, applets and other extras. Now you might think that search engines are far too picky for disliking dynamic URLs as well. Honestly, users are not in love with URLs like http://domain.com/product.php?cid=1&pid=5 either, because such URLs do not tell much about the contents of the page.
There are a couple of good reasons why static URLs score better than dynamic URLs. First, dynamic URLs are not always there – i.e. the page is generated on request after the user performs some kind of action (fills a form and submits it or performs a search using the site's search engine). In a sense, such pages are nonexistent for search engines, because they index the Web by crawling it, not by filling in forms.
Second, even if a dynamic page has already been generated by a previous user request and is stored on the server, search engines might just skip it if it has too many question marks and other special symbols in it. Once upon a time search engines did not index dynamic pages at all, while today they do index them but generally slower than they index static pages.
The idea is not to revert to static HTML only. Database-driven sites are great, but it is much better if you serve your pages to search engines and users in a format they can easily handle. One of the solutions to the dynamic URLs problem is called URL rewriting. There are special tools (different for different platforms and servers) that rewrite URLs in a friendlier format, so they appear in the browser like normal HTML pages. Try the URL Rewriting Tool below; it will convert the cryptic text from the previous example into something more readable, like http://mydomain.com/product-categoryid-1-productid-5.
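In practice the rewriting is usually configured on the web server (for example with Apache's mod_rewrite or an equivalent on other platforms), but conceptually it is just a reversible mapping between the two URL forms. A small Python sketch of the mapping used in the example above (the parameter names cid and pid are taken from that example URL):

from urllib.parse import urlsplit, parse_qs

def friendly_url(dynamic_url):
    parts = urlsplit(dynamic_url)
    params = parse_qs(parts.query)
    # build a keyword-style slug out of the query parameters
    slug = "product-categoryid-{}-productid-{}".format(params["cid"][0], params["pid"][0])
    return "{}://{}/{}".format(parts.scheme, parts.netloc, slug)

print(friendly_url("http://mydomain.com/product.php?cid=1&pid=5"))
# -> http://mydomain.com/product-categoryid-1-productid-5

The server-side rewrite rule simply applies the same mapping in reverse, turning the friendly URL back into the dynamic one before the page script runs.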

VIII. Promoting Your Site to Increase Traffic

The main purpose of SEO is to make your site visible to search engines, thus leading to higher rankings in search results pages, which in turn brings more traffic to your site. And having more visitors (and above all buyers) is ultimately the goal of site promotion. To tell the truth, SEO is only one way to promote your site and increase traffic – there are many other online and offline ways to accomplish the goal of getting high traffic and reaching your target audience. We are not going to explore them in this tutorial, but just keep in mind that search engines are not the only way to get visitors to your site, although they seem to be a preferable choice and a relatively easy way to do it.

1. Submitting Your Site to Search Directories, Forums and Special Sites

Importance of directories

Published by Rob at 8:10 am under SEO news
Web directories are among the most important tools used by search engine optimization experts to boost the rankings of their clients. The importance of directories in SEO stems from a variety of reasons, but it mostly boils down to directories being good sources of quality links. Search engine optimization focuses mostly on improving a website’s content and its links to make it rank higher in search results pages. Directories, as sources of quality links, satisfy an important requirement for effective SEO.
Background on directories
Before delving deeper into the importance of directories to SEO, it is important that you understand what directories are in the first place. A lot of people confuse directories with search engines but, unlike the latter, directories are manned by human editors who evaluate and categorize websites for inclusion in their lists. Moreover, directories do not list websites based on keywords; they list sites by general category (health, education, etc.) and, in some cases, a specific subcategory such as by state.
The importance of directories
Directories are essential to SEO because they provide valuable links. This stems largely from the fact that most directories are regarded as credible by search engines. Web directories are collections of numerous websites, including .edu and .org domains, which adds to their credibility. As such, a backlink from such directories counts as a positive vote for your website.
Increased web presence is another reason why directories make for important SEO tools. The fact that your website is listed in a web directory makes you more visible to the spiders or crawlers of search engines, which in turn increases the chances of your website being indexed.
A more direct benefit of joining directories is that they provide an alternative source of traffic, which further adds to your web presence.
Not all are created equal
One thing you must remember about web directories is that not all of them provide equal benefits. There are good and respectable directories and there are also bad ones. Needless to say, you should only submit your website to respectable directories and avoid bad ones, unless you want to risk wasting your effort or, worse, being blacklisted by search engines.
Reliable search engine optimization firms like Blackwood Productions recognize the importance of directories in SEO. Moreover, we make sure that our clients’ websites are only listed in good and reputable directories.

After you have finished optimizing your new site, the time comes to submit it to search engines. Generally, with search engines you don't have to do anything special in order to get your site included in their indices – they will come and find you. It cannot be said exactly when they will visit your site for the first time, or at what intervals they will visit it later, but there is hardly anything you can do to invite them. Sure, you can go to their Submit a Site pages and submit the URL of your new site, but by doing this do not expect that they will hop over to you right away. What is more, even if you submit your URL, most search engines reserve the right to judge whether to crawl your site or not. Anyway, here are the URLs for submitting pages to the three major search engines: Google, MSN, and Yahoo!.
In addition to search engines, you may also want to have your site included in search directories. Although search directories also list sites that are relevant to a given topic, they are different from search engines in several aspects. First, search directories are usually maintained by humans and the sites in them are reviewed for relevancy after they have been submitted. Second, search directories do not use crawlers to get URLs, so you need to go to them and submit your site; but once you do this, you can stay there forever and no further effort on your side is necessary. Some of the most popular search directories are DMOZ and Yahoo! (the directory, not the search engine itself), and here are the URLs of their submission pages: DMOZ and Yahoo!.
Sometimes posting a link to your site in the right forums or special sites can do miracles in terms of traffic. You need to find the forums and sites that are leaders in the fields of interest to you but generally even a simple search in Google or the other major search engines will retrieve their names. For instance, if you are a hardware freak, type “hardware forums” in the search box and in a second you will have a list of sites that are favorites to other hardware freaks. Then you need to check the sites one by one because some of them might not allow posting links to commercial sites. Posting into forums is more time-consuming than submitting to search engines but it could also be pretty rewarding.

2. Specialized Search Engines

Google, Yahoo!, and MSN are not the only search engines on Earth, nor even the only general-purpose ones. There are many other general-purpose and specialized search engines, and some of them can be really helpful for reaching your target audience. You just can't imagine how many niches specialized search engines exist for – from law, to radio stations, to education! Some of them are actually huge sites that gather Web-wide resources on a particular topic, but almost all of them have sections for submitting links to external sites of interest. So, after you find the specialized search engines in your niche, go to their sites and submit your URL – this could prove more traffic-worthy than striving to get to the top of Google.

3. Paid Ads and Submissions

We have already mentioned some alternatives to search engines – forums, specialized sites and search engines, search directories – but if you need to make sure that your site will be noticed, you can always resort to paid ads and submissions. Yes, paid listings are a fast and guaranteed way to appear in search results, and most of the major search engines accept payment to put your URL in the Paid Links section for keywords of interest to you. But you must also keep in mind that users generally do not trust paid links as much as they trust normal ones – in a sense it looks like you are bribing the search engine to place you where you can't get on your own – so think twice about the pros and cons of paying to get listed.

Similar Page Checker




http://www.webconfs.com/similar-page-checker.php

How it Works

Search Engines are known to act upon websites that contain Duplicate / Similar content.
Your content could be similar to other websites on the Internet, or pages from within your own website could be similar to each other (usually the case with dynamic product catalog pages).
This tool allows you to determine the percentage of similarity between two pages.
The exact percentage of similarity beyond which a search engine may penalize you is not known, and it varies from search engine to search engine. Your aim should be to keep your page similarity as LOW as possible.
Duplicate Content Filter
This article will help you understand why you might be caught in the filter, and ways to avoid it.


 

Search Engine Spider Simulator




http://www.webconfs.com/search-engine-spider-simulator.php

How it Works

A lot of the content and links displayed on a webpage may not actually be visible to search engines, e.g. Flash-based content, content generated through JavaScript, content displayed as images, etc.
This tool simulates a search engine by displaying the contents of a webpage exactly the way a search engine would see them.
It also displays the hyperlinks that will be followed (crawled) by a Search Engine when it visits the particular webpage.
See Your Site With the Eyes of a Spider
The article explains how Search Engines view a Webpage.

Backlink Anchor Text Analyzer


http://www.webconfs.com/anchor-text-analysis.php

How it Works

Quality backlinks are one of the most important factors in Search Engine Optimization.
It is not enough just to have a lot of backlinks; it is the quality of backlinks, along with their quantity, that helps you rank better in Search Engines.
A backlink could be considered a quality backlink if:
1. It links to your website with the keyword (keyphrase) that you are trying to optimize for.
2. The theme of the backlinking website is the same as that of your website.
This tool helps you determine the backlinks of your website and the link text used by your backlinks to link to your website.
Importance of Backlinks
This article will explain to you what a backlink is, why they are important, and what you can do to help gain them while avoiding getting into trouble with the Search Engines.

Backlink Builder

http://www.webconfs.com/backlink-builder.php

How it Works

Building quality backlinks is one of the most important factors in Search Engine Optimization.
It is not enough just to have a lot of backlinks; it is the quality of backlinks, along with their quantity, that helps you rank better in Search Engines.
A backlink could be considered a quality backlink if:
1. The theme of the backlinking website is the same as that of your website.
2. It links to your website with the keyword (keyphrase) that you are trying to optimize for.
This tool searches for websites on the theme you specify that contain keyphrases like "Add link", "Add site", "Add URL", "Submit URL", "Add Article", etc. Most of the results could be quality potential backlinks.

Backlink Summary


http://www.webconfs.com/backlink-summary.php

How it Works

This tool will give you a summary of your competitors' backlinks.

Keyword Density Checker - Keyword Cloud



http://www.webconfs.com/keyword-density-checker.php

How it Works

The Keyword Cloud is a visual depiction of the keywords used on a website; keywords with higher density are depicted in larger fonts.
Ideally your main keywords should appear in larger fonts at the start of the cloud.

Keyword density is the percentage of occurrences of your keywords relative to the rest of the text on your webpage.
It is important for your main keywords to have the correct keyword density to rank well in Search Engines.
This tool will crawl the given URL, extract the text as a search engine would, remove common stop words, and analyze the density of the keywords.
How to Avoid SEO over-optimization
This article shows how to avoid SEO over-optimization (and black hat SEO tricks), because intentional or unintentional over-optimization might turn into a real nightmare in terms of search engine ranking, or it can even lead to a temporary or permanent ban from search engines.

Search Engine Friendly Redirect Checker




http://www.webconfs.com/redirect-check.php

How it Works

A lot of us lose out on valuable search engine traffic due to incorrectly configured redirects.
It is very important that when a search engine comes to crawl your website it is able to follow any redirects you have set up.
Suppose you have a website http://www.foo.com and you create a redirect so that whenever a visitor types in the URL http://www.foo.com he is automatically redirected to http://www.foo.com/widgets/. If the Search Engine is not able to follow the redirect, it will think that http://www.foo.com has NO contents, and http://www.foo.com will end up ranking very badly in search engines.
This tool helps you determine if the redirect you have created is Search Engine Friendly.
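A quick way to see for yourself what a URL answers with is to request it without following redirects and look at the status code and Location header; a Search Engine Friendly permanent redirect normally answers with a 301. The sketch below uses Python's standard library and the example URL above (it is an illustration, not the tool itself):

import http.client
from urllib.parse import urlparse

URL = "http://www.foo.com/"   # the example URL from above

parts = urlparse(URL)
conn = http.client.HTTPConnection(parts.netloc)
conn.request("GET", parts.path or "/")
resp = conn.getresponse()

# A search engine friendly permanent redirect would show 301 plus a Location header.
print("Status:", resp.status, resp.reason)
print("Location:", resp.getheader("Location"))
conn.close()
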
See the How to Redirect article for information on how to create Search Engine Friendly redirects.

Kontera Ads Preview

Enter a URL


Note* The ads appear 1-2 seconds after the URL loads.

http://www.webconfs.com/kontera-preview-tool.php

How it Works

This tool allows you to preview Kontera Ads on your website.
Kontera is considered one of the good alternatives to Google AdSense.

Link Price Calculator

This Tool is temporarily Unavailable
sorry for the inconvenience caused...


Website / Domain Name


http://www.webconfs.com/link-value.php

How it Works

This tool will help you determine the approximate amount you should be paying (or charging) per month for a text link (ad) from each and every page of the specified website.
It takes into consideration factors such as number of Backlinks, Alexa traffic rank, age of the website, etc.

Reciprocal Link Check


Your Domain Name


List of URLs where your Reciprocal Links can be found
(One on each line)



http://www.webconfs.com/reciprocal-link-checker.php

How it Works

Reciprocal links have become one of the MOST popular methods of getting backlinks for your website. Although we don't really recommend them, a lot of SEOs have found success with their 2-way / 3-way reciprocal link exchanges.
This tool helps you ensure that your link partners are linking back to your website. It also determines the anchor text used by your link partners to link to your website.
Importance of Backlinks
This article will explain to you what a backlink is, why they are important, and what you can do to help gain them while avoiding getting into trouble with the Search Engines.

Check Yahoo WebRank

Enter one URL on each line of the box below:
Note* Results may vary if prefixed with www.


http://www.webconfs.com/check-yahoo-webrank.php

How it Works

Find out the Yahoo WebRank of your website and your competitors' websites.

Yahoo WebRank is basically a rank assigned to a URL by Yahoo on a scale of 0-10. It was introduced a couple of months ago as a Beta feature of the Yahoo toolbar; since it was an experimental feature, it is no longer available as part of the toolbar.
You can search for "Yahoo WebRank" in your favorite search engine to learn more about it.

This tool helps you evaluate the importance of a URL as perceived by Yahoo.

Domain Stats Tool

Enter Domain Name

http://www.webconfs.com/domain-stats.php

How it Works

This tool helps you get all kinds of statistics on your competitors' domains.
The statistics include Alexa Traffic Rank, age of the domains, Yahoo WebRank, DMOZ listings, backlink counts and the number of pages indexed in Search Engines like Google, Yahoo, MSN, etc.
It will probably help you figure out why some of your competitors are ranking better than you.
Importance of Backlinks
This article will explain to you what a backlink is, why they are important, and what you can do to help gain them while avoiding getting into trouble with the Search Engines.

Domain Age Tool

Enter one domain on each line of the box below:
http://www.webconfs.com/domain-age.php

How it Works

This tool displays the approximate age of a website on the Internet and allows you to view how the website looked when it first started.
It also helps you find out the age of your competitors' domains; older domains may get a slight edge in Search Engine Rankings.
This tool has been built using Archive.org's Wayback Machine.
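As a rough illustration of how such a lookup can work, the Python sketch below asks Archive.org's public "availability" endpoint for the snapshot closest to a very early date, which approximates the first time the Wayback Machine saw the site. The endpoint and its response format are assumptions here, and this is not the tool's own code:

import json
from urllib.request import urlopen

DOMAIN = "example.com"   # placeholder domain

# Ask for the snapshot closest to an early timestamp (assumed endpoint behaviour).
api = f"http://archive.org/wayback/available?url={DOMAIN}&timestamp=19960101"
data = json.loads(urlopen(api).read().decode("utf-8"))

snapshot = data.get("archived_snapshots", {}).get("closest")
if snapshot:
    print("Earliest snapshot found:", snapshot["timestamp"], snapshot["url"])
else:
    print("No archived snapshot found for", DOMAIN)
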
The Age of a Domain Name
This article explains the role of the Age of a domain name in search engine rankings.

Keyword Playground

Enter Keyword

http://www.webconfs.com/keyword-playground.php

How it Works

The basis of ALL search engine rankings is keywords. Users enter keyphrases in Search Engines to get the information they need, so it is VERY important that you optimize for such keyphrases.
This tool provides you with keyword suggestions and reports on their monthly search estimates.
This tool has been built using the WordTracker database.

Website Keyword Suggestions

Enter Website URL / Domain

http://www.webconfs.com/website-keyword-suggestions.php

How it Works

The basis of ALL search engine rankings is keywords. Users enter keyphrases in Search Engines to get the information they need, so it is VERY important that you optimize for such keyphrases.
This tool tries to determine the theme of your website and provides keyword suggestions along with keyword traffic estimates.
This tool has been built using the WordTracker database.

 

URL Rewriting Tool


Enter Dynamic URL


Eg. http://www.widgets.com/product.php?categoryid=1&productid=10
http://www.webconfs.com/url-rewriting-tool.php

How it Works

Static URLs are known to be better than Dynamic URLs for a number of reasons:
  1. Static URLs typically Rank better in Search Engines.
  2. Search Engines are known to index the content of dynamic pages much more slowly than that of static pages.
  3. Static URLs are friendlier looking to the End Users.

Example of a dynamic URL
http://www.widgets.com/product.php?categoryid=1&productid=10

This tool helps you convert dynamic URLs into static-looking HTML URLs.

Example of the above dynamic URL Re-written using this tool
http://www.widgets.com/product-categoryid-1-productid-10.htm


Note*
You would need to create a file called ".htaccess" and paste the generated code into it. Once you have created the .htaccess file, simply copy it into your web directory.
URL rewriting of this type works ONLY if your site is hosted on a server running Apache (typically on Linux) with mod_rewrite enabled.
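The generated rewrite rule itself lives in Apache's .htaccess file, but the mapping it performs is easy to see in a short sketch. The Python snippet below (purely illustrative) turns the example dynamic URL above into the static-looking form shown earlier:

from urllib.parse import urlparse, parse_qsl

dynamic_url = "http://www.widgets.com/product.php?categoryid=1&productid=10"

parts = urlparse(dynamic_url)
script = parts.path.rsplit("/", 1)[-1].rsplit(".", 1)[0]   # "product"
params = parse_qsl(parts.query)                            # [("categoryid", "1"), ("productid", "10")]

# Join the script name and each parameter pair with hyphens, as in the example above.
static_path = "-".join([script] + [f"{key}-{value}" for key, value in params]) + ".htm"
print(f"{parts.scheme}://{parts.netloc}/{static_path}")
# -> http://www.widgets.com/product-categoryid-1-productid-10.htm
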
Dynamic URLs vs. Static URLs
This article explains the various challenges of Dynamic URLs vs. Static URLs.

Keyword-Rich Domain Suggestion Tool

Enter Keyword

Choose Your Domain Extensions:
.com 
.net 
.org 
.info 
.biz 
.us 
.name 
.in 
http://www.webconfs.com/keyword-rich-domain-suggestions.php

How it Works

Having a KEYWORD-RICH domain name is an important factor in Search Engine Optimization. Choosing the right domain could BOOST your search engine rankings.
This tool will suggest keyword-rich domain names.
This tool has been built using the WordTracker database.
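Setting the WordTracker data aside, the basic combination step is simple enough to sketch. The snippet below (a toy illustration with a placeholder keyphrase) joins a keyphrase with the selectable extensions to produce candidate domain names; checking which of them are actually available is a separate step:

KEYWORD = "cheap widgets"   # placeholder keyphrase
EXTENSIONS = [".com", ".net", ".org", ".info", ".biz", ".us", ".name", ".in"]

# Two common ways of joining the words: run together and hyphenated.
for base in (KEYWORD.replace(" ", ""), KEYWORD.replace(" ", "-")):
    for extension in EXTENSIONS:
        print(base + extension)
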

Website to Country

Website / Domain Name

http://www.webconfs.com/website-to-country.php

How it Works

Many people wonder why their websites don't rank well in country-specific Search Engines. Most Search Engines (including Google) determine the country of a website based on the physical location of the website's IP address. You will probably rank well in a country-specific Search Engine if your website is hosted in that country.

This tool helps determine the Country in which the specified website is hosted.
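The first step of such a lookup is simply resolving the domain to the IP address it is hosted on; mapping that IP to a country then requires a geolocation database, which is beyond this short sketch (placeholder domain, not the tool's own code):

import socket

DOMAIN = "www.example.com"   # placeholder domain

ip = socket.gethostbyname(DOMAIN)
print(f"{DOMAIN} resolves to {ip}")
# Looking this IP up in a geolocation database would then give the hosting country.
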
Bookmarklet : Website to Country
*Drag the above link to your browser's Links Toolbar.
*While viewing a website, click on the bookmarklet in your links toolbar to determine the Country in which the website is located.

Ranking in Country Specific Search Engines
This article explains how your website could Rank better in Country Specific Search Engines.

Alexa Ranking Tool

This Tool is temporarily Unavailable
sorry for the inconvenience caused...

Find out how your web site traffic ranks up against all your competitors!

Remember: the lower the Alexa ranking number, the more heavily visited the site.

Some examples are:
yahoo.com (rank 1); google.com (rank 3); webconfs.com (rank 11,000).

Enter one domain on each line of the box below:


Similar Tool : Domain Stats Tool
This tool helps you get all kinds of statistics on your competitors' domains. The statistics include Yahoo WebRank, pages indexed by various Search Engines, backlink count, Alexa Traffic Rank, and age of the domains.

HTTP / HTTPS Header Check


Enter the URL whose headers you want to view

http://www.webconfs.com/http-header-check.php

How it Works

This tool allows you to inspect the HTTP headers that the web server returns when requesting a URL. Works with HTTP and HTTPS URLs.
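If you prefer to check headers from a script rather than the browser, a few lines of Python show the same information (illustrative only, with a placeholder URL):

from urllib.request import urlopen

URL = "https://www.example.com/"   # placeholder URL

with urlopen(URL) as resp:
    print(resp.status, resp.reason)
    for name, value in resp.getheaders():
        print(f"{name}: {value}")
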
Bookmarklet : Webconf's HTTP Header Check
*Drag the above link to your browser's Links Toolbar.
*While viewing a website, click on the bookmarklet in your links toolbar to view the HTTP Headers returned by the URL

Advanced Domain Whois Lookup Tool

Domain Name / IP Address


Bookmarklet : Webconf's Whois Lookup
*Drag the above link to your browser's Links Toolbar.
*While viewing a website, click on the bookmarklet in your links toolbar to discover the whois details for its domain.

 

 

IP to City

IP Address

http://www.webconfs.com/ip-to-city.php

How it Works

This tool helps you determine the Country, City, Latitude and Longitude of an IP Address.
Ranking in Country Specific Search Engines
This article explains how your website could Rank better in Country Specific Search Engines.

File Search Engine

File Name

http://www.webconfs.com/file-search-engine.php

How it Works

This tool helps you locate particular files on the Internet.
You can enter an exact or partial filename, and you will be shown a list of URLs from which you can download the specified file.

 

 

 

Web Page Screen Resolution Simulator

Simulate your web page in different screen resolutions.

Select Resolution
160x160 Pixels
320x320 Pixels
640x480 Pixels
800x600 Pixels
1024x768 Pixels
1152x864 Pixels
1600x1200 Pixels
Enter URL

SEO Bookmarklets

*Drag the Bookmarklets to your browser's Links Toolbar.
*While viewing a website, click on the bookmarklet in your links toolbar.

Bookmarklet : Webconf's Spider Simulator
Displays the text & links that the Search Engine would see when it crawls the page you are visiting.


Bookmarklet : Webconf's Keyword Density Checker
Will crawl the URL you are visiting, extract text as a search engine would, remove common stop words and analyze the density of the keywords.


Bookmarklet : Webconf's HTTP Header Check
Inspect the HTTP headers that the web server returns when requesting a page/file.


Bookmarklet : Webconf's Robots.txt Viewer
Displays the robots.txt file for the site you are visiting.


Bookmarklet : Webconf's Whois Lookup
Displays domain whois information of the site you are visiting.


Bookmarklet : Website to Country
Determines the Country of the website you are visiting.


Bookmarklet : Webconfs - Window Resize 640x480
Simulate your web page in 640x480 resolution

Bookmarklet : Webconfs - Window Resize 800x600
Simulate your web page in 800x600 resolution

Bookmarklet : Webconfs - Window Resize 320x320
Simulate your web page in 320x320 resolution









SEO Articles

Top 10 SEO Mistakes

1. Targeting the wrong keywords

This is a mistake many people make and what is worse – even experienced SEO experts make it. People choose keywords that in their mind are descriptive of their website but that average users just may not search for. For instance, if you have a relationship site, you might discover that “relationship guide” does not work for you, even though it has the “relationship” keyword, while “dating advice” works like a charm. Choosing the right keywords can make or break your SEO campaign. Even if you are very resourceful, you can't think of all the great keywords on your own, but a good keyword suggestion tool, for instance the Website Keyword Suggestion tool, will help you find keywords that are good for your site.

2. Ignoring the Title tag

Leaving the <title> tag empty is also very common. This is one of the most important places to have a keyword, because not only does it help you in optimization but the text in your <title> tag shows in the search results as your page title.

3. A Flash website without a html alternative

Flash might be attractive but not to search engines and users. If you really insist that your site is Flash-based and you want search engines to love it, provide an html version. Here are some more tips for optimizing Flash sites. Search engines don't like Flash sites for a reason – a spider can't read Flash content and therefore can't index it.

4. JavaScript Menus

Using JavaScript for navigation is not bad as long as you understand that search engines do not read JavaScript and build your web pages accordingly. So if you have JavaScript menus you can't do without, you should consider building a sitemap (or putting the links in a noscript tag) so that all your links will be crawlable.

5. Lack of consistency and maintenance

Our friend Rob from Blackwood Productions often encounters clients who believe that once you optimize a site, it is done forever. If you want to be successful, you need to permanently optimize your site, keeping an eye on the competition and on changes in the ranking algorithms of search engines.

6. Concentrating too much on meta tags

A lot of people seem to think SEO is about getting your meta keywords and description correct! In fact, meta tags are becoming (if not already) a thing of the past. You can create your meta keywords and descriptions, but don't expect to rank well because of this alone.

7. Using only Images for Headings

Many people think that an image looks better than text for headings and menus. Yes, an image can make your site look more distinctive, but in terms of SEO, images for headings and menus are a big mistake because h1, h2, etc. tags and menu links are important SEO items. If you are afraid that your h1, h2, etc. tags look horrible, try modifying them in a stylesheet or consider this approach: http://www.stopdesign.com/articles/replace_text.

8. Ignoring URLs

Many people underestimate how important a good URL is. Dynamic page names are still very frequent and URLs without keywords are more the rule than the exception. Yes, it is possible to rank high even without keywords in the URL, but all else being equal, if you have keywords in the URL (the domain itself, or file names, which are part of the URL), this gives you an additional advantage over your competitors. Keywords in URLs are more important for MSN and Yahoo!, but even with Google their relative weight is high, so there is no excuse for having keywordless URLs.

9. Backlink spamming

It is a common delusion that more backlinks are ALWAYS better, and because of this webmasters resort to link farms, forum/newsgroup spam, etc., which could ultimately lead to getting their site banned. In fact, what you need are quality backlinks. Here is some more information on the Importance of Backlinks.

10. Lack of keywords in the content

Once you focus on your keywords, modify your content and put the keywords wherever it makes sense. It is even better to make them bold or highlight them.

Link Exchange overview
Link exchange is a marketing technique in which two or more websites offering similar services exchange text or banner links. It is considered one of the most cost-effective forms of search engine optimization (SEO) because it can improve a site's ranking with relatively little effort and cost. If you have an online business, regardless of the size, link exchange is certainly one of the best ways to promote your company.

  Link popularity

Most search engines rank websites in terms of link popularity. The more inbound links there are to your site, the better your chances of appearing on the search page. A simple way to achieve this is to post links on blogs, forums, and other free sites, but that takes a lot of time and effort. With link exchange, you can get dozens to thousands of inbound links without having to set up each one.

  Better Traffic

Each inbound link acts as a door to your site. Post your link in an online forum or within an article, and people are bound to follow it. The more visitors you have, the better your chances of selling your product or service. And it’s not just the number of site hits: because your links appear in relevant pages, you can be sure your visitors are interested in your business in the first place.

  Free vs. Paid

Link exchange is usually free when you negotiate directly with other site owners.  But when you use a link exchange program, there is usually a monthly or one-time fee depending on the contract. Companies such as Free Relevant Links offer basic link exchange for free, but provide additional benefits to paying customers. These include more SEO strategies, search engine placement, and ranking reports.

Types of link exchange

There are different ways to trade links with other sites. Here are some of them.

Reciprocal: This is the simplest and most popular form of link exchange. It involves two websites exchanging inbound links—Site 1 posts a link to Site 2, and Site 2 places a link to Site 1. It is also called a mutual or two-way link exchange.

Non-reciprocal: This is a one-way system in which a site links to another without getting any inbound links in return. The site owner usually buys the links from another site with good traffic and PR. For example, if Site 1 enjoys excellent traffic, Site 2 can pay Site 1 to post links to improve its own rankings. Non-reciprocal links are considered more valuable than reciprocal ones.
Indirect non-reciprocal: This method requires at least three websites and is commonly used by those running multiple sites. Basically, Site 1 links to Site 2, and Site 2 links to Site 3. Sites 1 and 3 are owned by the same person. This way, the links given are non-reciprocal, so they are given more importance by search engines.

 

Using background-image to replace text

Please note: The original technique (FIR) described in the body of this article is no longer recommended for use, as it makes the hidden text completely inaccessible for certain screen readers. Instead, see one of the alternative techniques mentioned at the end of the article under “Important Notes”.
This tutorial assumes a base-level knowledge of CSS, but not much more. Beyond that, it also assumes care will be taken to use these methods fairly and responsibly with well-structured markup.

Introduction

Do you still crave the typographic control of creating headlines and decorative type with images instead of pure HTML text? Even with all the options we have for styling text with CSS, sometimes there’s just nothing that beats the indulgence of opening up Adobe Photoshop, then setting type in your favorite font at just the right size, kerning, and tracking. You know if you save it as an image and place it on a webpage, anyone with an image-enabled browser will see your typographic mastery just as you intended. Right?
But we’ve been beaten over the head so many times with preaching that claims using images for Web type will send us sliding straight to hell. So much so, we’ve dropped our visual standards and given up on the idea that type on the Web can ever be beautiful again. That is, until CSS font downloading is perfected and reliable many years from now.
We’ve been taught images aren’t as accessible as pure marked-up HTML text. This is especially true for assistive browsing software and small-screen devices. So we feel guilty using images for type. Images don’t show up in text-only browsers like Lynx, nor in browsers where users disable image rendering. Even if we’re responsible, always including equivalent alt attributes for each image tag, search robots often index meta data (like alt and title) differently than pure HTML text. This is understandable if we consider the logical importance of heading text when it’s appropriately marked up inside <hn></hn> tags. We won’t even touch on file size problems and download times caused by excessive image use.
Let’s put all that knowledge on hold for a moment. Images aren’t that bad, are they? With a few simple style tricks and a little cautious planning and testing, type on the Web can be guiltlessly beautiful and equally accessible (see notes below) and perfectly indexable, all at the same time. It’s time to spread those wings again.

The Concept

In principle, the concept is very simple. We write a short string of text (eg. “Hello world!”) and surround it with two sets of basic HTML tags. Then we use CSS to hide the HTML text and display a background image containing the exact same words instead of the original text. That’s it. Replacing text with an image is no more complicated than this.
Before we write any CSS, let’s start with some basic markup. Imagine we have a simple HTML snippet like the following:
 
<div>
  <span>Hello world!</span>
</div>
Of course we could apply style directly to that text. But we want more drama and finesse than any font-family or text-transform property can ever bring us. We want flourish. After all, if we’re saying “Hello!” to the world, we might as well do it with panache, right?
We pick out the perfect typeface for our worldly salutation: Shelley Allegro. A well-known script face in the design world, we discover Shelley has just the right flair to win the hearts of millions as we say Hello. We guess this font may only be available on 1.65% of the computer systems out there. So we take the time to create an image representing the same Hello message. The image will display our message in Shelley Allegro in every image-enabled browser, regardless of whether the font is installed on that system:
Hello world!
We take note of the image height, (35 pixels) since it will be useful a bit later.
So we have some HTML, a glorious text message, and one extraneous image. What do we do with them? Let’s roll up our sleeves and use a little style to replace the text with our new image.

Fahrner Image Replacement (FIR)

This method for replacing text with an image is named for Todd Fahrner, one of the persons originally credited with the idea. You may wonder why there are two sets of tags surrounding our Hello message in the HTML above. A div and a span. Technically the two tags could be anything. In fact, your custom solution may require something a bit more semantic. But we’ll choose these two generic wrappers for this example.
The CSS which executes the swap consists of two simple rules. The first uses background properties to pull our image into the background of the div:
 
div {
  background-image:url("hello_world.gif");
  background-repeat:no-repeat;
  height:35px;
  }
Note the height property in that rule. It matches the actual height of our image, ensuring the div is tall enough to display all of our image, yet takes up no more height than necessary. The background-repeat property ensures the image only appears once, instead of tiling across the width of our browser window.
The only remaining task is hiding the raw text which we left in our HTML. This is where span comes in. We need a second element so we can address it separately. Hiding the text is easy:
 
span {display:none;}
Combine those two rules with the HTML we wrote earlier, and we end up with a simple example. So simple, we wonder why it’s taken all this text so far to explain it?
Of course, we’d never leave our markup that simple. Nor would we likely be able to continue using such elementary style rules. Otherwise, every div we use would contain our “Hello world!” background image, and anything we place inside <span></span> tags would magically disappear.
Let’s move on to a few real-world examples.

Example 1: Page Titles

One example of text-replacement is readily available on many of the main pages of stopdesign.com. Notice the primary titles for each section or page (i.e. the words “Recent Log Entries” on the front page). They aren't created by clever font-styling rules in the CSS. Those are images. Lovingly crafted to match the typeface of the logo. Matted to appropriate colors matching the background on which each is placed. It's a subtle effect, but one that's part of this site's identity.
If we view the source, or toggle the CSS off, we see the title image is not part of the markup for that page. In fact, where most of us probably see an image, the markup uses simple <h1>’s and pure HTML text to label that page and proclaim the title’s prominence and hierarchy in the document structure.
Screen readers, small-screen devices, and indexing robots should ignore any screen stylesheets, getting the pure text marked up as a simple <h1>.
One method of matching images to specific pages is to base the title image on the section in which it lives. Each body could be given an id or class unique to that section. Through the use of descendant selectors, each <h1> could be tied to an appropriate image based on the class of the body that contains it. But each section may contain more than one type of page, and we may want a more appropriate title for subpages of each section.
Instead, it may be wiser to assign each title a unique id matching or abbreviating the words it represents. For example, the Recent Log Entries title on stopdesign’s front page is assigned an id of “t-reclog“. The “t-” prefix is added to help create values which won’t be accidentally repeated in other element id values. “t-” always stands for title in this case. This id makes the markup a little redundant, but allows for the greatest flexibility in assigning any title image to any page.
Each replaced title needs a couple simple style properties assigned to it which are common to all replaced titles. In addition to the id, each title which needs to be “swapped” with an image is given a class of “swap“. The swap class exists so the same properties can be applied to all replaced titles without having to repeat those common properties in every unique title id rule. Alternatively, it avoids the need to specify every unique id appearing throughout the site just to create one rule of common properties. The application of the swap class ensures only those <h1>’s possessing class="swap" get setup for replacement. Other non-replaced <h1>’s can exist elsewhere on the site without needing to create rules which override the swap rules. The class addition is a small sacrifice in markup purity for huge simplification gains in the CSS.
For replaced titles on stopdesign.com, the common CSS looks like:
 
h1.swap {
  height:22px;
  background-repeat:no-repeat;
  }
h1.swap span {display:none;}
And unique id rules look like:
 
h1#t-reclog {background-image:url("/img/title_reclog.gif");}
h1#t-articles {background-image:url("/img/title_articles.gif");}
h1#t-portfolio {background-image:url("/img/title_port.gif");}
For ease of editing and maintenance, all page title rules were moved into a separate titles.css file, which is imported by a linked master screen.css file.

Example 2: Controlled Drop Caps

Ever wanted a special drop cap to decorate the first letter of a paragraph? Yet didn't like the clunky form created by bumping that HTML letter up to 500% of normal font-size? What about using an image instead? Of course we wouldn't want to change the way that paragraph is read or presented should the image not be displayed. If we employ a slight variation of the replace method demonstrated above, we can use almost any type of drop cap desired. For example, let's once again call on our familiar typeface, Shelley Allegro, to create an “E” which starts this same paragraph:
E
We won’t want the first letter of the paragraph to be separated on its own line (as a div would do). In fact, let’s say we want absolutely no special emphasis to be added when stylesheets are not used to view or read this paragraph. In this case, we’ll use two sets of generic span’s to markup the first letter of our paragraph:
 
<p><span class="dropcap"><span>E</span></span>ver wanted a ...
Remember the outer element is what we use to apply the background image. The inner element is what we use to hide the raw HTML text. In this example, we’ll also be floating the outer element so the rest of the paragraph wraps around our drop cap. We’ll also set the display property to “block” to ensure compatibility with the widest range of browsers (even though use of float should do this automatically). We create the following CSS:
 
span.dropcap {
  display:block;
  float:left;
  width:46px;
  height:63px;
  margin-right:5px;
  background-image:url("dropcap_e.gif");
  background-repeat:no-repeat;
  }
span.dropcap span {display:none;}
The width and height in the first rule above are taken from the image dimensions. We’ve also applied a small right-side margin to pad our drop cap image. Combine the HTML and CSS, apply a small amount of style to the paragraph itself, and we have a simple drop cap example.

More Examples

Creative uses for text replacement are only limited by our imagination. Other possibilities might include:
  • Type-based logos and names
  • Site headers
  • Pull-quotes
  • Single-word substitution for cosmetic effect (like “and” or “vs.”)
The method may also be useful for changing themes with alternate stylesheets. Just as you could change colors and styling, an entirely different image could be used for each theme.

Responsible Replacement

This replacement method needs to be exercised with a certain amount of responsibility. Care should be taken in matching the text of each replacing image with the same raw text in the HTML. Otherwise, it wouldn’t be fair to give visitors seeing full-blown stylesheets different textual content than visitors seeing, hearing, or feeling content without the same stylesheet applied. The replacing image can alter type characteristics such as size, color, capitalization, or tightened word-spacing. But those stylistic decisions are made as part of the design process, and should not be applied to the HTML text itself. For example, sandwiching words together (removal of word-spacing, as in the stopdesign.com title images) in the HTML would create unreadable gibberish for screen readers and braille clients. Thus, the HTML text should be spaced, capitalized, and spelled appropriately as if it were completely unstyled in the first place.
Several very important caveats and disadvantages of this method need to be mentioned:
First, although the method is better [than only including raw images] for search indexing robots, it hinders the ability to find the replaced text on the page when the user executes a “Find on this page” search, or when the user attempts to copy and paste the text.
Second, it may be rare that images are disabled in a browser while CSS remains enabled, but in these cases CSS will still hide the text yet be unable to display the images, resulting in an unintended blank space with no text showing at all. As stated, these cases should be rare: when one is disabled or unavailable, the other usually is too.
Third, image text is not overridable by the user: they will not be able to resize the text, nor alter its color or contrast to make it more legible (as is possible with raw text).
These disadvantages need to be carefully considered in determining if this method is acceptable in specific instances. If this method is employed, the caveats should also be taken into consideration as the images used are being designed. Small or low-contrast text within the image would be irresponsible. Color-blindness may be a huge issue to consider when choosing colors and values for the text and/or background.
And remember, too much (or incorrect use) of a good thing can always come back to bite and betray. When we experiment with this method, we use it sparingly and with much caution.

Browser Compatibility

Mac: Camino .7+, IE 5+, Mozilla, Netscape 6+, OmniWeb 4+, Opera 5+, Safari
Win: Firebird .6+, IE 5+, Mozilla, Netscape 6+, Opera 5+, Phoenix .5+

Important Notes

This method has been tested and proven to fail in several popular screen readers. See Joe Clark’s well-researched, thoughtfully-written piece at A List Apart: Facts and Opinion About Fahrner Image Replacement.
The findings Joe presents in his article obviously defeat the original intentions of the method itself: creation of a more flexible and accessible solution than a simple <img> with alt text. The flaw with this method is that it assumes text hidden using display:none gets hidden in visual browsers, but will still get read aloud by a screen reader. This is not the case in several screen readers, even if we specify the “screen” media type for our style sheet. Screen readers also pay attention to the screen media type, since they literally read from what’s displayed on screen. Most screen readers don’t support the “aural” media type, so there’s no benefit to specifying speak properties in an aural style sheet.
Some have suggested the use of visibility:hidden; instead of display:none;, but this also prevents the text from being read in most of the same screen readers. Current versions of JAWS — arguably the most widely used screen reader — will read text hidden with FIR. But judging from behavior of other screen readers, we should not rely on this being the case with future versions of JAWS.
As is true with any content-altering technique, the advantages and disadvantages should be carefully considered for each unique case before being implemented. Since this article was written, several alternative methods have surfaced, each with their own advantages. However, no methods have emerged as the “Holy Grail” of text/image replacement. Existing alternatives to FIR are listed below:
Leahy/Langridge Image Replacement (LIR)
This method eliminates the span by setting height of the parent element to 0 and overflow to hidden, which hides the text. Then it uses top padding to force the element to the height of the image in the background. Conceived at similar times by Seamus Leahy and Stuart Langridge.

Rundle’s Text-Indent Method
Mike Rundle devised a simple method of using the CSS text-indent property to shift contained text outside the visible element window.

Cover-up Method
Another method devised by both Petr Stanicek (a.k.a. “Pixy”) and Tom Gilder uses an empty span element to position a background image on top of the text, allowing the text to show up when images are turned off (or don’t load) in the browser.

 

SEO vs. SEO 2.0 Comparison

SEO – Un-Natural Linking: gaining backlinks by submitting to directories, buying links, and manually adding links from requests. Optimized for links.
SEO 2.0 – Natural Linking: gaining links through socializing, blogs, forums, and automatic linking with our SEO 2.0 search exchange community. Optimized for traffic and sales conversions.

SEO – Quantity: keyword stuffing and repetitive titles and descriptions, designed with the search engine spiders in mind. Optimized for keywords.
SEO 2.0 – Quality: completely unique content using an LSI content structure with no keyword stuffing, designed not only for the search engines but with a focus on the human viewer. Optimized for tags.

SEO – Competition: webmasters fight each other trying to gain the best advantages and win top 10 positions.
SEO 2.0 – Cooperation: webmasters help each other by linking to one another and building a strong community, so they each get better rankings.

SEO – Introverted: "We're not doing SEO; no, we can't show you our client list, so don't ask." Secretive SEO companies.
SEO 2.0 – Extraverted: "Welcome to our newest client, Company X; we are glad they decided to join our family."

SEO – Optimization: clicks, pageviews, visits.
SEO 2.0 – Innovation: sales conversions, ROI, company branding.

SEO – Link Structure: inbound links to the home page only; links from the home page to the interior pages.
SEO 2.0 – Link Infrastructure: inbound links to all pages; links from interior pages out to the home page.

SEO – Non-Authoritative: building your site from the top down, putting all your emphasis on the home page.
SEO 2.0 – Authoritative: building your site from the bottom up, making each page just as important as the home page.

SEO – On-Page Optimization: cleaning up code, adding keywords, and writing content.
SEO 2.0 – Off-Page Optimization: gaining links, joining networks, social bookmarking, and exchanging links.


Choosing SEO as Your Career

It's always better to know in advance what you can expect from a career in SEO.

Some Good Reasons to Choose SEO as Your Career

1. High demand for SEO services

Once SEO was not a separate profession - Web masters performed some basic SEO for the sites they managed and that was all. But as sites began to grow and make money, it became more reasonable to hire a dedicated SEO specialist than to have the Web master do it. The demand for good SEO experts is high and is constantly on the rise.

2. A LOT of people have made a successful SEO career

There is plenty of living proof that SEO is a viable business. The list is too long to be quoted here, but some of the names include Rob from Blackwood Productions, Jill Whalen from High Rankings, Rand Fishkin from SEOmoz and many others.

3. Search Engine Optimizers make Good Money!

SEO is a profession that can be practiced while working for a company or as a solo practitioner. There are many job boards like Dice and Craigslist that publish SEO job advertisements. It is worth noting that the compensation for SEO employees is equal to or even higher than that of developers, designers and marketers. Salaries over $80K per annum are not an exception for SEO jobs.
As a solo SEO practitioner you can make even more money. Almost all freelance sites have sections for SEO services, and offers for $50 an hour or more are quite common. If you are still not confident that you can work on your own, you can start with an SEO job, learn a bit and then start your own company.
If you already feel confident that you know a lot about SEO, you can take this quiz and see how you score. Well, don't get depressed if you didn't pass - here is a great checklist that will teach you a lot, even if you are already familiar with SEO.

4. Web design alone MAY NOT be enough

Many companies offer turn-key solutions that include Web design, Web development AND SEO optimization. In fact, many clients expect that when they hire somebody to make their site, the site will be SEO friendly, so if you are good both as a designer and as an SEO expert, you will be a truly valuable professional.
On the other hand, many other companies are dealing with SEO only because they feel that this way they can concentrate their efforts on their major strength – SEO, so you can consider this possibility as well.

5. Logical step ahead if you come from marketing or advertising

The Web has changed the way companies do business, so to some extent today's marketers and advertisers need to have at least some SEO knowledge if they want to be successful. SEO is also a great career for linguists.

6. Lots of Learning

For somebody who comes from design, development or web administration, SEO might not look technical enough, and you might feel that you will be downgrading if you move to SEO. Don't worry so much - you can learn a LOT from SEO, so if you are a talented techie, you are not downgrading but actually upgrading your skill set.

7. SEO is already recognized as a career

Finally, if you need some more proof that SEO is a great career, have a look at the available courses and exams for SEO practitioners. Well, they might not be a Cisco certification, but they still help to institutionalize the SEO profession.

Some Ugly Aspects of SEO

1. Dependent on search engines

It is true that in any career there are many things that are outside of your control, but for SEO this is rule number one. Search engines frequently change their algorithms and, what is worse, these changes are not made public, so even the greatest SEO gurus admit that they make a lot of educated guesses about how things work. It is very discouraging to make everything perfect and then learn that due to a change in the algorithm your sites dropped 100 positions down. But the worst part is that you need to communicate this to clients, who are not satisfied with their sinking ratings.

2. No fixed rules

Probably this will change over time, but for now the rule is that there are no rules – or at least no written ones. You can work very hard, follow everything that looks like a rule, and still success does not come. Currently you can't even take a search engine to court for the damage done to your business, because search engines are not obliged to rank highly sites that have made efforts to get optimized.

3. Rapid changes in rankings

But even if you somehow manage to get to the top for a particular keyword, keeping the position requires constant efforts. Well, many other businesses are like that, so this is hardly a reason to complain – except when an angry customer starts shouting at you that this week their ratings are sinking and of course this is all your fault.

4. SEO requires Patience

The SEO professional and customers both need to understand that SEO takes constant effort and time. It could take months to move ahead in the ratings, or to build tens of links. Additionally, if you stop optimizing for some time, most likely you will experience a considerable drop in ratings. You need lots of motivation and patience not to give up when things are not going your way.

5. Black hat SEO

Black hat SEO is probably one of the biggest concerns for the would-be SEO practitioner. Fraud and unfair competition are present in any industry, and those who are good and ethical suffer because of it; black hat SEO is no exception and is still pretty widespread. It is true that search engines penalize black hat practices, but black hat SEO remains a major concern for the industry.

So, let's hope that by telling you about the pros and cons of choosing SEO as your career we have helped you make an informed decision about your future.

 

How to get Traffic from Social Bookmarking sites

Sites like digg.com, reddit.com, stumbleupon.com, etc. can bring you a LOT of traffic. How about getting 20,000 or more visitors a day when your listing hits the front page?
Getting to the front page of these sites is not as difficult as it seems. I have been successful with digg and del.icio.us (and not so much with Reddit though the same steps should apply to it as well) multiple times and have thus compiled a list of steps that have helped me succeed:

1. Pay attention to your Headlines

Many great articles go unnoticed on social bookmarking sites because their headline is not catchy enough. Your headline is the first (and very often the only) thing users will see from your article, so if you don't make the effort to provide a catchy headline, your chances of getting to the front page are small.
Here are some examples to start with:

Original headline : The Two Types of Cognition
Modified Headline : Learn to Understand Your Own Intelligence

Original headline: Neat way to organize and find anything in your purse instantly!
Modified Headline : How to Instantly Find Anything in Your Purse

Here is a good blog post that should help you with your headlines.

2. Write a meaningful & short description

The headline is very important for drawing attention, but if you want to keep that attention, a meaningful description is vital. The description must be slightly provocative because this draws more attention, but still, never use lies and false facts to provoke interest. For instance, if you write “This article will reveal to you the 10 sure ways to deal with stress once and forever and live like a king from now on.” visitors will hardly think that your story is true and fact-based.

You also might be tempted to use a long tell-it-all paragraph to describe your great masterpiece but have in mind that many users will not bother to read anything over 100-150 characters. Additionally, some of the social bookmarking sites limit descriptions, so you'd better think in advance how to describe your article as briefly as possible.

3. Have a great first paragraph

This is a rule that is always true, but for successful social bookmarking it is even more important. If you have successfully passed Level 1 (headlines) and Level 2 (description) in the Catch the User's Attention game, don't let a bad first paragraph make them leave your site.

4. Content is king

However, the first paragraph is not everything. Going further along the chain of drawing (and retaining) users' attention, we reach the Content is King Level. If your articles are just trash, bookmarking them is useless. You might cheat users once but don't count on repetitive visits. What is more, you can get your site banned from social bookmarking sites, when you persistently post junk.

5. Make it easy for others to vote / bookmark your site

It is best when other people, not you, bookmark your site. Therefore, you must do your best to make it easy for them. You can put a bookmarking button at the end of the article, so if users like your content, they can easily post it. If you are using a CMS, check if there is an extension that allows you to add Digg, Del.icio.us, and other buttons, but if you are using static HTML, you can always go to the social bookmarking site and copy the code that will add their button to your pages.
Here is a link that should help you add Links for Del.icio.us, Digg, and More to your pages.

6. Know when to submit

The time when you submit can be crucial for your attempts to get to the front page. On most social bookmarking sites you have only 24 hours to get to the front page and stay there. So, if you post when most users (and especially your supporters) are still sleeping, you are wasting valuable time. By the time they get up, you might have gone to the tenth page. You'd better try it for yourself and see if it works for you but generally posting earlier than 10 a.m. US Central Time is not good. Many people say that they get more traffic around 3 p.m. US Central Time. Also, workdays are generally better in terms of traffic but the downside is that you have more competitors for the front page than on weekends.

7. Submit to the right category

Sometimes a site might not work for you because there is no right category for you. Or because you don't submit to the right category – technology, health, whatever – but to categories like General, Miscellaneous, etc. where all unclassified stuff goes. And since these categories fill very fast, your chance to get noticed decreases.

8. Build a top-profile

Not all users are equal on social bookmarking sites. If you are an old and respected user who has posted tons of interesting stuff, this increases the probability that what you submit will get noticed. Posting links to interesting articles on other sites is vital for building a top-profile. Additionally, it is suspicious when your profile has links to only one site. Many social bookmarking sites frown upon users submitting their own content because this feels like self-promotion.

9. Cooperate with other social bookmarkers

The lone wolf approach is a losing strategy on sites like StumbleUpon, Digg and Netscape. Many stories make it to the front page not only because they are great but because they are backed up by a network of friends. If in the first hours after your submission you get at least 15 votes from your friends and supporters, it is more likely that other users will vote for you too. 50 votes can get you to the top page of Digg.

10. Submit in English

Linguistic diversity is great, but the majority of users are from English-speaking countries and they don't understand exotic languages. So, on most social bookmarking sites, submitting anything in a language other than English is not recommended. The languages at a particular disadvantage are Chinese, Arabic, the Slavic languages and all the others that use a non-Latin alphabet. German, Spanish and French are more understandable, but they are still not English. If you really must submit your story (i.e. because you need the backlink), include an English translation at least of the title. But the best way to proceed with non-English stories is to post them where they belong. Check this link for a list of non-English sites.

11. Never submit old news

Submitting old news will not help you become a respected user. Yesterday's news is history. But if you still need to submit old stuff, consider feature articles, how-tos and similar pieces that stay up-to-date for a long time.

12. Check your facts

You might be flattered that users read your postings, but you will hardly be flattered when users prove that you haven't got the facts right. In addition to sarcastic comments, you might also receive negative votes for your story, so if you want to avoid this, check your facts - or your readers will do it for you.

13. Check your spelling

Some sites do not allow you to edit your posts later, so if you misspell the title, the URL, or a keyword, it will stay that way forever.

14. Not all topics do well

But sometimes even great content and submitting to the right category do not push you to the top. One possible reason could be that your stories are about unpopular topics. Many sites have topics that their users love and topics that don't sell that well. For instance, Apple sells well on Digg and the War in Iraq on Netscape. Negative stories – about George Bush, Microsoft, evil multinational companies, corruption and crime – also have a chance to make it to the front page. You can't know these things in advance, but some research on how many stories tagged with keywords like yours have made the front page in the last year or so can give you a clue.

15. Have Related Articles / Popular Articles

Traffic gurus joke that traffic from social bookmarking sites is like an invasion – the crowds pour in and in a day or two they are gone. Unfortunately this is true – after your listing rolls from the front page (provided that you reached the front page), the drop in traffic is considerable. Besides, many users come just following the link to your article, have a look at it and then they are gone. One of the ways to keep them longer on your site is to have links to Related Articles / Popular Articles or something similar that can draw their attention to other stuff on the site and make them read more than one article.

16. RSS feeds, newsletter subscriptions, affiliate marketing

RSS feeds, newsletter subscriptions, affiliate marketing are all areas in which the traffic from social bookmarking sites can help you a lot. Many people who come to your site and like it, will subscribe to RSS feeds and/or your newsletter. So, you need to put these in visible places and then you will be astonished at the number of new subscriptions you got on the day when you were on the front page of a major social bookmarking site.

17. Do not use automated submitters

After some time of active social bookmarking, you will discover that you are spending hours on end posting links. Yes, this is a lot of time and using automated submitters might look like the solution but it isn't. Automated submitters often have malware in them or are used for stealing passwords, so unless you don't care about the fate of your profile and don't mind being banned, automated submitters are not the way to go.

18. Respond to comments on your stories

Social bookmarking sites are not newsgroups, but interesting articles can trigger pretty heated discussions with hundreds of comments. If your article gets comments, you should be proud. Always respond to comments on your stories and, even better, post comments on other stories you find interesting. This is a way to make friends and to create a top-profile.

19. Prepare your server for the expected traffic

This is hardly a point of minor importance, and we take it for granted that you are hosting your site on a reliable server that does not crash twice a day. But keep in mind that being on the front page of a major social bookmarking site can drive a lot of traffic to you, which can cause your server to crash – literally!
I remember one of the times I was on the front page on Digg, I kept restarting Apache on my dedicated server because it was unable to cope with the massive traffic. I have many tools on my site and when the visitors tried them, this loaded the server additionally.
Well, for an articles site getting so much traffic is not so devastating, but if you are hosting on a so-so server, you'd better migrate your site to a machine that can handle a lot of simultaneous hits. Also, check whether your monthly traffic allowance is enough to handle 200,000-500,000 or even more visitors. It is very amateurish to attract a lot of visitors and not be able to serve them because your server crashed or you exceeded your bandwidth!

20. The snowball effect

But despite the differences in the likes of the different social bookmarking communities, there are striking similarities. You will soon discover that if a post is popular on one of the major sites, this usually drives it up on the other big and smaller sites. Usually it is Digg posts that become popular on StumbleUpon and Reddit but there are many other examples. To use this fact to your best advantage, you may want to concentrate your efforts on getting to the front page of the major players only and bet on the snowball effect to drive you to the top on other sites.
An additional benefit of the snowball effect is that if your posting is interesting and people start blogging about it, you can get tons of backlinks from their blogs. This happened to me and the result was that my PR jumped to 6 on the next update.

 

 

Choosing a SEO Company

After you have been dealing with SEO on your own for some time, you may discover that no matter how hard you try, your site does not rank well, or that your site ranks well but optimizing it for search engines takes all your time and all your other tasks lag behind. If this is the case for you, maybe it is better to consider hiring an SEO company to do the work for you. With so many SEO companies out there, you can't complain that you have no choice. Or is it just the opposite – so many companies, but few reliable ones?
It is stretching the truth to say that there are no reliable SEO companies. Yes, there might be many scam SEO companies, but if you know what to look for when selecting an SEO company, the risk of hiring fraudsters is reduced. It is much better if you yourself have substantial knowledge of SEO and can easily decide whether they are promising you the stars in the sky or their goals are realistic, but even if you are not quite familiar with SEO practices, here is a list of points to watch for when choosing an SEO company:
·         Do they promise to guarantee #1 ranking? If they do, you have a serious reason to doubt their competence. As the Google SEO selection tips say, no one can guarantee a #1 ranking in Google. This is true even for not-so-competitive words.
·         Get recommendations from friends, business partners, etc. Word of mouth is very important for the credibility of a company. For instance, we do not perform SEO services, but despite that we constantly receive e-mails asking for SEO services. We always direct these inquiries to Blackwood Productions because we have worked with this company for a long time and we know that they are competent and reliable.
·         Ask in forums. There are many reputable Web master forums, so if you can't find somebody who can recommend you a SEO company right away, consider asking in Web master forums. However, beware that not all forum posters are honest people, so take their opinion (no matter if positive or negative) with a grain of salt. Forums are not such a reliable source of information as in-person contact.
·         Google the company name. If the company is a known fraudster, chances are that you will find a lot of information about it on the Web. However, lack of negative publicity does not mean automatically that the company is great, nor do some subjective negative opinions mean that the company is a scammer.
·         Ask for examples of sites they have optimized. Happy customers are the best form of promotion, so feel free to ask your potential SEO company about sites they have optimized and references from clients. If you get a rejection because of confidentiality reasons, this must ring a bell about the credibility of the SEO company - former customers are not supposed to be a secret.
·         Check the PR of their own site. If they can't optimize their site well enough to get a good PR (over 4-5), they are not worth hiring.
·         Ask them what keywords their site ranks for. Similarly to the page rank factor, if they don't rank well for the keywords of their choice, they are hardly as professional as they are pretending to be.
·         Do they use automated submissions? If they do, stay away from them. Automated submissions can get you banned from search engines.
·         Do they use any black hat SEO tricks? You need to know in advance what black hat SEO is in order to judge them, so it is worth getting familiar with the most important black hat SEO tricks before you start cross-examining them.
·         Where do they collect backlinks from? Backlinks are very, very important for SEO success but if they come from link farms and other similar sites, this can cause a lot of trouble. So, make sure the SEO firm collects links from reputable sites only.
·         Get some personal impressions, if possible. Gut instinct and impressions from meetings are also a way to judge a company, though it is sometimes easy to be misled, so use this approach with caution.
·         A high price does not guarantee high quality. If you are willing to pay more, this does not mean that you will get more. A firm that charges more is NOT automatically a better SEO. There are many reasons for high prices, and high quality is only one of them. For instance, the company might work inefficiently, and that, not the quality of their work, is the reason for their ridiculously high costs.
·         Cheap can turn out to be more expensive. This is also true. If you think you can pay peanuts for a professional SEO campaign, think again. Professional SEO companies charge realistic prices.
·         Use tricky questions. Using tricky questions is a double-edged sword, especially if you are not an expert. But there are several easy questions that can help you.
For instance, you might ask them how many search engines they will automatically submit your site to. If they are scammers, they will try to impress you with big numbers. But in this case, the best answer would be "no automatic submissions".
Another tricky question is to ask whether they will place you in the top 10 for some competitive keywords of your choice. The trap here is that it is they, not you, who choose the words that are best for your site. It is unlikely that they will pick exactly the same words you suggest, so if they tell you to just hand over the words and they will push you to the top, tell them “Goodbye”.
·         Do they offer subscription services? SEO is an ongoing process, and if you want to rank well and stay there, continuous effort is necessary. Because of this, it is better to select a company that includes post-optimization maintenance than one that pushes your site to the top and then leaves you in the wild on your own.
We have tried to cover some of the most important issues in selecting an SEO company. Of course, there are many other factors to consider and each case is different, so give it some thought before you sign a contract with an SEO company.

Keyword Difficulty

Choosing the right keywords to optimize for is the first and most crucial step in a successful SEO campaign. If you fail at this very first step, the road ahead is very bumpy and most likely you will only waste your (or your client's) money and time. There are many ways to determine which keywords to optimize for, and usually the final list is made after a careful analysis of what the online population is searching for, which keywords your competitors have chosen and, above all, which keywords you feel describe your site best. All of this is great and certainly this is the way to go, but if you want to increase your chances of success, additional research never hurts, especially when its results will spare you shooting in the dark.

Dreaming High - Shooting the Top-Notch Keywords?

After you have made a long and detailed list of lucrative keywords that are searched for by tens of thousands of people a day, do not hurry yet. It is great that you have chosen popular keywords, but it would be even greater if you chose keywords for which top positioning is achievable with reasonable effort. If you have many competitors for the keywords you have chosen, chances are that, no matter how hard you try, you will not be able to overtake them and place your site among the top ten results. And as every SEO knows, if you can't be on the first page (or the second, or in the worst case the third) of the organic search results, you'd better reconsider whether the potential gain from optimizing for those particular words is worth the effort. It is true that sometimes even sites beyond the first 50 results get decent traffic from search engines, but you certainly can't count on that. And even if you somehow manage to get to the top, do you have any idea what it will take to keep those results?
You may feel discouraged that all the lucrative keywords are already taken, but it is too early to give up. Low-volume search keywords can be as lucrative as the high-volume ones, and their main advantage is that you will have less competition. The SEO experts from Blackwood Productions confirm that with less effort, and within budget, it is possible to achieve much better results with low-volume keywords than by targeting the high-volume ones. In order to do this, you need to estimate how difficult it would be to rank well for a particular keyword.

Get Down to Earth

The best way to estimate how difficult it would be to rank well for a particular keyword is by using the appropriate tools. If you search the Web, you will find several keyword difficulty tools. Choose a couple of them, for instance Seochat's Keyword Difficulty Tool, Cached's Keyword Difficulty Tool and Seomoz's Keyword Difficulty Tool, and off you go. The idea behind choosing multiple tools is not that you have so much free time that you need a way to waste it. If you choose only one tool, you will finish your research faster, but given the different results each tool gives, you'd better double-check before you start the optimization itself. The Seomoz tool is a bit complicated and requires several registrations before you can use it, but it is worth the trouble (and the patience, while you wait for the results to be calculated).
You may also want to check several keywords or keyword phrases. You will be surprised to see how different the estimated difficulty for similar keywords is! For instance, if you are optimizing a financial site that deals mainly with credits and loans, and some of your keywords are finance, money, credit, loan, and mortgage, running a check with Seochat's Keyword Difficulty Tool produces results like these (the percentages are rounded but you get the idea): finance – 89%, money – 76%, credit – 74%, loan – 66%, mortgage – 65%.
It seems that the keyword finance is very tough, and since your site is targeted at credits and loans and not at the stock exchange or insurance, which are also branches of finance, there is no need to cry over the fact that it is very difficult to compete for the finance keyword.
The results were similar with the second tool, though it does not give percentages but uses a scale from Very Easy to Very Difficult. I did not check all the results with the third tool, because the Seomoz report on keyword difficulty for a particular word takes ages to compile, but the results were similar, so it becomes clear that it is more feasible to optimize for mortgage and loan than for the broader term finance.
You may want to bookmark some of these tools for future use as well. They are very useful for monitoring changes in the keyword difficulty landscape. After you have optimized your site for the keywords you have selected, occasionally recheck the difficulty of the keywords you are already optimizing for, because the percentages change over time, and if you discover that the competition for your keywords has increased, put in some additional effort to retain the positions you have gained.

Optimizing for MSN

SEO experts often forget that there are three major search engines. While there is no doubt that Google is number one with the most searches, and Yahoo! manages to get about a quarter of the market, MSN has not retired yet. It holds about 10-15 percent of searches (according to some sources even less – about 5%), but it has a loyal audience that can't be reached through the other two major search engines, so if you plan a professional SEO campaign, you can't afford to skip MSN. In a sense, getting high rankings in MSN is similar to getting high rankings for less popular keywords – because the competition is not that tough, MSN alone might bring you a decent stream of visitors with far less effort than optimizing for a more popular search engine would require.
Although optimizing for MSN is different from optimizing for Google and Yahoo!, there are still common rules that will help you rank high in any search engine. As a rule, if you rank well in Google, chances are that you will rank well in Yahoo! (if you are interested in tips and tricks for optimizing for Yahoo!, you may want to have a look at the Optimizing for Yahoo! article) and in MSN as well. The opposite is not true, however. If you rank well in MSN, there is no guarantee that you'll do the same in Google. So, when you optimize for MSN, keep an eye on your Google ranking as well. It's no good to top MSN and be nowhere in Google (the opposite is more acceptable, if you have to choose).
But why is this so? The answer is simple - the MSN algorithm is different and that is why, even if the same pages were indexed, the search results will vary.

The MSN Algorithm

As already mentioned, it is the different MSN algorithm that leads to such drastic results in ranking. Otherwise, MSN, like all search engines, first spiders the pages on the Web, then indexes them in its database and after that applies the algorithm to generate the pages with the search results. So, the first step in optimizing for MSN is the same as for the other search engines – to have a spiderable site. (Have a look at Search Engine Spider Simulator to see how spiders see your site). If your site is not spiderable, then you don't have even a hypothetical chance to top the search results.
There is quite a lot of speculation about the MSN algorithm. Looking at the search results MSN delivers, it is obvious that its search algorithm is not as sophisticated as Google's, or even Yahoo!'s, and many SEO experts agree that the MSN search algorithm is years behind its competitors. So, what can you do in this case? Optimize as you did for Google a couple of years ago? You are not far from the truth, though it is actually not that simple.
One of the most important differences is that MSN still relies heavily on metatags, as explained below. None of the other major search engines uses metatags that heavily anymore. It is obvious that metatags give SEO experts a great opportunity for manipulating search results. Maybe metatags are the main reason for the inaccurate search results that MSN often produces.
The second most important difference between MSN and the other major search engines is their approach to keywords. For MSN, keywords are very important too, but unlike Google, MSN is dominated by onpage factors, while offpage factors (like backlinks, for example) are still of minor importance. It is a safe bet that the weight of backlinks will change in the future, but for now they are not a primary factor for high rankings in MSN.

Keywords, Keywords, Keywords

It is hardly surprising that keywords are the most important item for MSN. What is surprising is how much MSN relies on them. It is very easy to fool MSN – just artificially inflate your keyword density, put a couple of keywords in file names (or even better, in domain names) and near the top of the page, and you are almost done as far as MSN is concerned. But if you follow the above-mentioned black hat practices, your joy at topping MSN will not last long because, unless you provide separate pages optimized for Google, your stuffed pages might very well get you banned from Google. And if you decide to have separate pages for Google and MSN, first, it is hardly worth the trouble, and second, the risk of a duplicate content penalty can't be ignored.
So, what is the catch? The catch is that if you try to polish your site for MSN and stuff it with keywords, this might get you into trouble with Google, which certainly is worse than not ranking well in MSN. But if you optimize wisely, it is more likely than not that you will rank decently in Google and perform well in Yahoo! and MSN as well.

Metatags

Having meaningful metatags never hurts but with MSN this is even more important because its algorithm still uses them as a primary factor in calculating search results. Having well-written (not stuffed) metatags will help you with MSN and some other minor search engines, while at the same time well-written metatags will not get you banned from Google.
The Description metatag is very important:
<META NAME="Description" CONTENT="Place your description here" />
MSNBot reads its content and, based on that (in addition to the keywords found on the page), judges how to classify your site. So if you leave this tag empty (i.e. CONTENT=""), you have missed a vital chance to be noticed by MSN. There is no evidence that MSN uses the other metatags in its algorithm, which is why leaving the Description metatag empty is even more unforgivable.
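As a minimal sketch only (the site name, title and wording below are made up for illustration, not a recommendation), a head section with a meaningful Description metatag could look like this:
<HEAD>
<TITLE>Low-Interest Mortgage Loans - Example Lender</TITLE>
<META NAME="Description" CONTENT="Example Lender offers low-interest mortgage loans, credit advice and loan calculators." />
</HEAD>
The point is to describe the page honestly in a sentence or two that naturally contains your main keywords, rather than to stuff the tag.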

 

Web Directories and Specialized Search Engines

SEO experts spend most of their time optimizing for Google and occasionally for one or two other search engines. There is nothing wrong with that, and it is quite logical, given that topping Google accounts for the lion's share of Web popularity. But very often, no matter what you do, topping Google does not happen. Or sometimes the price you need to pay (not literally, but in terms of effort and time) to top Google and stay there is too high. And we should mention here the ultimate SEO nightmare – being banned from Google, when you simply can't use Google (at least not until you are readmitted to the club) and, whether you like it or not, you need to look at possible alternatives.

What are Google Alternatives

The first alternative to Google is obvious – optimize for the other major search engines, if you have not done it already. Yahoo! and MSN (to a lesser degree) can bring you enough visitors, though sometimes it is virtually impossible to optimize for the three of them at the same time because of the differences in their algorithms. You could also optimize your site for (or at least submit to) some of the other search engines (Lycos, Excite, Netscape, etc.) but having in mind that they altogether hardly have over 3-5% of the Web search traffic, do not expect much.
Another alternative is to submit to search directories (also known as Web directories) and specialized search engines. Search directories might sound so pre-Google, but submitting to the right directories might prove better than optimizing for MSN, for example. Specialized search engines and portals have the advantage that the audience they attract consists of people who are interested in a particular topic, and if this is your topic, you can reach your target audience directly. It is true that specialized search engines will not bring you as many visitors as topping Google would, but the quality of these visitors is extremely high.
Naming all the Google alternatives would produce a long list and is outside the scope of this article, but to be a little more precise about what alternatives exist, we should also mention SEO instruments like posting to blogs and forums, and paid advertisements.

Web Directories

What is a Web Directory?

Web directories (or as they are better known – search directories) existed before the search engines, especially Google, became popular. As the name implies, web directories are directories where different resources are gathered. Similarly to desktop directories, where you gather files in a directory based on some criterion, Web directories are just enormous collections of links to sites, arranged in different categories. The sites in a Web directory are listed in some order (most often alphabetic but it is not necessarily so) and users browse through them.
Although many Web directories offer search functionality of some kind (otherwise it would be impossible to browse thousands of pages in, let's say, the Computers category), search directories are fundamentally different from search engines in two ways – most directories are edited by humans, and URLs are not gathered automatically by spiders but submitted by site owners. The main advantage of Web directories is that, no matter how clever spiders become, when there is a human to view and check the pages, there is less chance that pages will be classified in the wrong categories. The disadvantages of human editing are that the listings in Web directories are sometimes outdated, if no human was available to do the editing and checking for some time (this is not that bad because search engines also deliver pages that no longer exist), and that sometimes you might have to wait half a year before being included in a search directory.
The second difference – no spiders – means that you must go and submit your URL to the search directory, rather than sit and wait for the spider to come to your site. Fortunately, this is done only once for each directory, so it is not that bad.
Once you are included in a particular directory, in most cases you can stay there as long as you wish to and wait for people (and search engines) to find you. The fact that a link to your site appears in a respectable Web directory is good because first, it is a backlink and second, you increase your visibility for spiders, which in turn raises your chance to be indexed by them.

Examples of Web Directories

There are hundreds and thousands of search directories but undoubtedly the most popular one is DMOZ. It is a general purpose search directory and it accepts links to all kinds of sites. Other popular general-purpose search directories are Google Directory and Yahoo! Directory. The Best of the Web is one of the oldest Web directories and it still keeps to high standards in selecting sites.
Besides general-purpose Web directories, there are a great many topical ones. For instance, The Environment Directory lists links to environmental sites only, while The Radio Directory lists thousands of radio stations worldwide, arranged by country, format, etc. There are also many local and national Web directories, which accept links only to sites about a particular region or country and which can be great if your site is targeted at a local or national audience. It is not possible to list even the topics of the specialized search directories here, because the list would get incredibly long. Using Google and specialized search resources like The Search Engines Directory, you can find many directories related to your area of interest on your own.

Specialized Search Engines

What is a Specialized Search Engine?

Specialized search engines are one more tool to include in your SEO arsenal. Unlike general-purpose search engines, specialized search engines index pages for particular topics only and very often there are many pages that cannot be found in general-purpose search engines but only in specialized ones. Some of the specialized search engines are huge sites that actually host the resources they link to, or used to be search directories but have evolved to include links not only to sites that were submitted to them. There are many specialized search engines for every imaginable topic and it is always wise to be aware of the specialized search engines for your niche. The examples in the next section are by no means a full list of specialized search engines but are aimed to give you the idea of what is available. If you search harder on the Web, you will find many more resources.

Examples of Specialized Search Engines

Specialized search engines are probably not as numerous as Web directories, but there is certainly no shortage of them either, especially if one counts as specialized search engines the password-protected sites whose databases are accessible only from within the site. As with Web directories, a list of specialized search engines would be really, really long (and constantly changing), so instead, here are some links to lists of search engines: Pandia Powersearch, Webquest, Virtual Search Engines, the already mentioned The Search Engines Directory, etc. What these lists have in common is that they offer a selection of specialized search engines, arranged by topic, so they are a good starting point for the hunt for specialized search engines.

Importance of Sitemaps

There are many SEO tips and tricks that help in optimizing a site, but one whose importance is sometimes underestimated is the sitemap. A sitemap, as the name implies, is just a map of your site – i.e. on one single page you show the structure of your site, its sections, the links between them, etc. Sitemaps make navigating your site easier, and having an updated sitemap on your site is good both for your users and for search engines. Sitemaps are an important channel of communication with search engines. While in robots.txt you tell search engines which parts of your site to exclude from indexing, in your sitemap you tell search engines where you'd like them to go.
Sitemaps are not a novelty. They have always been part of best Web design practices, but with the adoption of sitemaps by search engines they have become even more important. However, it is necessary to clarify that if you are interested in sitemaps mainly from an SEO point of view, you can't get by with the conventional sitemap alone (though currently Yahoo! and MSN still stick to the standard html format). For instance, Google Sitemaps uses a special (XML) format that is different from the ordinary html sitemap for human visitors.
One might ask why two sitemaps are necessary. The answer is obvious – one is for humans, the other is for spiders (for now mainly Googlebot, but it is reasonable to expect that other crawlers will join the club shortly). In that regard it is necessary to clarify that having two sitemaps is not regarded as duplicate content. In 'Introduction to Sitemaps', Google explicitly states that using a sitemap will never lead to a penalty for your site.
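For illustration only, here is a minimal sketch of what such an XML sitemap can look like (the URLs and dates are made up, and you should check the current Google Sitemaps documentation for the exact namespace and fields before relying on it):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- one <url> entry per page you want the crawler to know about -->
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2006-10-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
  <url>
    <loc>http://www.example.com/services.html</loc>
    <changefreq>monthly</changefreq>
  </url>
</urlset>
The html sitemap for human visitors, by contrast, remains an ordinary page of links organized by section.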

Why Use a Sitemap

Using sitemaps has many benefits, not only easier navigation and better visibility by search engines. Sitemaps offer the opportunity to inform search engines immediately about any changes on your site. Of course, you cannot expect that search engines will rush right away to index your changed pages but certainly the changes will be indexed faster, compared to when you don't have a sitemap.
Also, when you have a sitemap and submit it to the search engines, you rely less on external links to bring search engines to your site. Sitemaps can even help with messy internal links – for instance, if you accidentally have broken internal links or orphaned pages that cannot be reached in any other way (though there is no doubt that it is much better to fix your errors than to rely on a sitemap).
If your site is new, or if you have a significant number of new (or recently updated pages), then using a sitemap can be vital to your success. Although you can still go without a sitemap, it is likely that soon sitemaps will become the standard way of submitting a site to search engines. Though it is certain that spiders will continue to index the Web and sitemaps will not make the standard crawling procedures obsolete, it is logical to say that the importance of sitemaps will continue to increase.
Sitemaps also help in classifying your site content, though search engines are by no means obliged to classify a page as belonging to a particular category or as matching a particular keyword only because you have told them so.
Having in mind that the sitemap programs of major search engines (and especially Google) are still in beta, using a sitemap might not generate huge advantages right away but as search engines improve their sitemap indexing algorithms, it is expected that more and more sites will be indexed fast via sitemaps.

Generating and Submitting the Sitemap

The steps you need to perform in order to have a sitemap for your site are simple. First, you need to generate it, then you upload it to your site, and finally you notify Google about it.
Depending on your technical skills, there are two ways to generate a sitemap - to download and install a sitemap generator or to use an online sitemap generation tool. The first is more difficult but you have more control over the output. You can download the Google sitemap generator from here. After you download the package, follow the installation and configuration instructions in it. This generator is a Python script, so your Web server must have Python 2.2 or later installed, in order to run it.
The second way to generate a sitemap is easier. There are many free online tools that can do the job for you. For instance, have a look at this collection of Third-party Sitemap tools. Although Google says explicitly that it has neither tested, nor verified them, this list will be useful because it includes links to online generators, downloadable sitemap generators, sitemap plugins for popular content-management systems, etc., so you will be able to find exactly what you need.
After you have created the sitemap, you need to upload it to your site (if it is not already there) and notify Google about its existence. Notifying Google includes adding the site to your Google Sitemaps account, so if you do not have an account with Google, it is high time to open one. Another detail that is useful to know in advance is that in order to add the sitemap to your account, you need to verify that you are the legitimate owner of the site.
Currently Yahoo! and MSN do not support sitemaps, or at least not in the XML format used by Google. Yahoo! allows webmasters to submit “a text file with a list of URLs” (which can actually be a stripped-down version of a sitemap), while MSN does not offer even that, though there are rumors that it indexes sitemaps when they are available on the site. Most likely this situation will change in the near future and both Yahoo! and MSN will catch up with Google, because user-submitted sitemaps are simply too powerful an SEO tool to be ignored.
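As a rough sketch, such a plain-text URL list (the file name urllist.txt and the addresses below are only an assumption for illustration) is nothing more than one URL per line:
http://www.example.com/
http://www.example.com/services.html
http://www.example.com/contact.html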

How to Build Backlinks

It is beyond question that quality backlinks are crucial to SEO success. The question, rather, is how to get them. While with on-page content optimization it seems easier, because everything is up to you to do and decide, with backlinks it looks like you have to rely on others to work for your success. Well, this is only partially true, because while backlinks are links that start on another site and point to yours, you can discuss details like the anchor text with the webmaster of the other site. Yes, it is not the same as administering your own site – i.e. you do not have total control over backlinks – but still there are many aspects that can be negotiated.

Getting Backlinks the Natural Way

The idea behind including backlinks as part of the PageRank algorithm is that if a page is good, people will start linking to it. And the more backlinks a page has, the better. In practice, though, it is not exactly like this – or at least you cannot always rely on the assumption that because your content is good, people will link to you. Yes, if your content is good and relevant you can get a lot of quality backlinks, including from sites with topics similar to yours (and these are the most valuable kind of backlinks, especially if the anchor text contains your keywords), but what you get without effort could be less than what you need to successfully promote your site. So, you will have to resort to other ways of acquiring quality backlinks, as described next.

Ways to Build Backlinks

Even if plenty of backlinks come to your site the natural way, additional quality backlinks are always welcome, and the time you spend building them is not wasted. Among the acceptable ways of building quality backlinks are getting listed in directories and posting in forums, blogs and article directories. The unacceptable ways include inter-linking (linking from one site to another site that is owned by the same owner or exists mainly to serve as a link farm), linking to spam sites or sites that host any kind of illegal content, purchasing links in bulk, linking to link farms, etc.
The first step in building backlinks is to find the places from which you can get quality backlinks. A valuable assistant in this process is the Backlink Builder tool. When you enter the keywords of your choice, the Backlink Builder tool gives you a list of sites where you can post an article, message, posting, or simply a backlink to your site. After you have the list of potential backlink partners, it is up to you to visit each of the sites and post your content with the backlink to your site in it.
You might wonder why sites such as those listed by the Backlink Builder tool give away such a precious asset as backlinks for free. The answer is simple – they need content for their site. When you post an article or submit a link to your site, you do not get paid for it. You provide them, for free, with something they need – content – and in return they provide you, for free, with something you need – quality backlinks. It is a fair trade, as long as the sites where you post your content or links are respectable and you don't post fake links or content.

Getting Listed in Directories

If you are serious about your Web presence, getting listed in directories like DMOZ and Yahoo is a must – not only because this is a way to get some quality backlinks for free, but also because this way you are easily noticed by both search engines and potential visitors. Generally inclusion in search directories is free but the drawback is that sometimes you have to wait a couple of months before you get listed in the categories of your choice.

Forums and Article Directories

Generally search engines index forums so posting in forums and blogs is also a way to get quality backlinks with the anchor text you want. If the forum or blog is a respected one, a backlink is valuable. However, in some cases the forum or blog administrator can edit your post, or even delete it if it does not fit into the forum or blog policy. Also, sometimes administrators do not allow links in posts, unless they are relevant ones. In some rare cases (which are more an exception than a rule) the owner of a forum or a blog would have banned search engines from indexing it and in this case posting backlinks there is pointless.
While forum postings can be short and do not require much effort, submitting articles to directories can be more time-consuming, because articles are generally longer than posts and need careful thought while writing them. But it is also worth it, and it is not so difficult to do.

Content Exchange and Affiliate Programs

Content exchange and affiliate programs are similar to the previous methods of getting quality backlinks. For instance, you can offer interested sites your RSS feeds for free. When the other site publishes your RSS feed, you will get a backlink to your site and potentially a lot of visitors, who will come to your site for more details about the headline and abstract they read on the other site.
Affiliate programs are also good for getting more visitors (and buyers) and for building quality backlinks, but they tend to be an expensive way because the affiliate commission is generally in the range of 10 to 30%. But if you have an affiliate program anyway, why not use it to get some more quality backlinks?

News Announcements and Press Releases

Although this is hardly an everyday way to build backlinks, it is an approach that gives good results if handled properly. There are many sites (for instance, here is a list of some of them) that publish news announcements and press releases for free or for a fee. A professionally written press release about an important event can bring you many, many visitors, and the backlink from a respected site to yours is a good boost to your SEO efforts. The tricky part is that you cannot issue a press release if there is nothing newsworthy. That is why we say that news announcements and press releases are not a routine way to build backlinks.

Backlink Building Practices to Avoid

One practice to avoid is link exchange. There are many programs which offer to barter links. The principle is simple – you put a link to a site, they put a backlink to your site. There are a couple of important things to consider with link exchange programs. First, watch the ratio between outbound and inbound links. If your outbound links far outnumber your inbound links, this is bad. Second (and more important) is the risk that your link exchange partners are link farms. If this is the case, you could even be banned from search engines, so it is too risky to indulge in link exchange programs.
Linking to suspicious places is something else that you must avoid. While it is true that search engines do not punish you for backlinks coming from such places, because it is assumed that you have no control over what the bad guys link to, if you enter a link exchange program with the so-called bad neighbors and you link to them, this can be disastrous to your SEO efforts. For more details about bad neighbors, check the Bad Neighborhood article. Also, beware of getting tons of links in a short period of time, because this looks artificial and suspicious.

Reinclusion in Google

Even if you are not looking for trouble and do not knowingly violate any SEO rules, you still might have to experience the ultimate SEO nightmare – being excluded from Google's index. Although Google is something of a monopolist among search engines, it is not a bully that excludes innocent victims for pure pleasure. Google enforces its guidelines rigorously and excludes sites that misbehave.

Not Present in Google's Index

First, it is necessary to clarify that the fact your site is missing from Google's index can mean two things:
a. You have not been included yet, though you have submitted an inclusion request. As described in the Google Sandbox article, it is normal to have to wait some time before being indexed for the first time. You can't do anything to speed up the process but wait.
b. You have been excluded from Google's index because of violation on your site. As said, this is a real nightmare for any SEO and you will need to take some steps to correct this most unfavorable situation. The rest of the article explains how.

Why Does Google Exclude Sites?

There are many reasons that can make Google exclude your site(s), and all of them are related to a violation of some kind. For instance, your sites may be over-optimized, which makes them very suspicious. Over-optimization has many faces, and you can have a look at the Optimization, Over-Optimization or SEO Overkill? article to get an idea of the practices you should avoid.
Besides over-optimizing onsite content, some of the other reasons for being excluded from Google are search engine spamming, hidden text, hosting illegal content, linking to bad neighbors, inter-linking, etc. There is no exhaustive list of SEO sins that Google does not tolerate, nor will you get a letter from Google informing you that you have been a bad boy and that is why you have been kicked out of its index. But if you resort to any form of SEO manipulation and attempt to mislead search engines, you can expect that sooner or later you will have to deal with reinclusion.

Reinclusion Steps

After you discover that you have been excluded from Google, the first step is to analyze why. You need to know what made them angry with you and correct your mistakes. Check for links to link farms and bad neighbors, for doorway pages and keyword stuffing. It is unlikely that you don't know your own sins.
Next, you have to contact Google with a reinclusion request. Go to Google Sitemaps and from the Tools menu on the right, select Submit a Reinclusion Request. On the next screen, read carefully the instructions and explanations, fill in the required data (you may want to have a look at the next section - Reinclusion Tips for ideas what to write) and submit your request.
After you submit your reinclusion request, there is nothing more you can do than fix your errors (if you have not already done so) and wait patiently for the answer.
Though the process of submitting a reinclusion request is pretty straightforward, there is some general advice, which can help you. The following tips can improve your chances of success.

Reinclusion Tips

·         Admit your errors and fix them
This has already been said, but it is a big mistake to write to Google and play innocent. You can lie to yourself, but you will not convince them that you are a martyr who has been suffering because of their cruelty. And above all – fix your mistakes before you submit the reinclusion request. It is pointless to leave your errors unfixed and wait for reinclusion, because you will simply never get reincluded this way. What is more, you will be undermining your chances of success in the future as well.
·         Be polite.
The worst mistake you can make in your reinclusion request is to be rude. Threatening Google with lawsuits, or hinting that you might boycott their AdWords program in revenge for being excluded from their index, is a deadly mistake. Google is not obliged to provide you with free traffic, so being included in its index is not a special privilege granted to you for your AdWords money.
·         Look at their Webmasters Guidelines.
It is unlikely that they have changed them recently and you suddenly no longer comply, but it does not hurt to double-check that you have done what Google recommends.
·         Don't spam them.
Google receives heaps of e-mails and it is not possible to answer each incoming message within an hour or so of it being submitted. Bombarding Google with tons of e-mails (even polite ones) can only make your situation worse.
·         Is it your first time?
Google may not keep statistics on its recidivists, but if your site just happens to get banned several times a year, this looks very suspicious. If you are banned for the first time, you can count on amnesty. But if you have been banned many times, you may be out of luck with reinclusion requests for the same site.
·         Reassure them that it is not going to happen again.
This is also very important, because if Google gets the impression that you violate their rules very often, they might be reluctant to reinclude you. In cases where it was not your personal fault – e.g. a webmaster you hired sent out spam, or your site got hacked – explain what happened, giving a detailed timeframe of the events.
·         Consider AdWords
If you really rely heavily on traffic from Google, consider buying AdWords. This is not blackmail (we ban you, you pay for AdWords), because many sites simply do not pay for AdWords and rely on other traffic-generating schemes instead.

Optimizing Flash Sites

If there is one really hot potato that divides SEO experts and Web designers, it is Flash. Undoubtedly a great technology for including sound and pictures on a Web site, Flash movies are a real nightmare for SEO experts. The reason is pretty prosaic – search engines cannot index (or at least not easily) the contents inside a Flash file, and unless you feed them the text inside a Flash movie, you can simply write that text off as far as boosting your rankings is concerned. Of course, there are workarounds, but until search engines start indexing Flash movies as if they were plain text, these workarounds are just a clumsy way to optimize Flash sites, although they are certainly better than nothing.

Why Search Engines Dislike Flash Sites?

Search engines dislike Flash Web sites not because of their artistic qualities (or the lack of these) but because Flash movies are too complex for a spider to understand. Spiders cannot index a Flash movie directly, as they do with a plain page of text. Spiders index filenames (and you can find tons of these on the Web), but not the contents inside.
Flash movies come in a proprietary binary format (.swf) and spiders cannot read the insides of a Flash file, at least not without assistance. And even with assistance, do not count on spiders crawling and indexing all your Flash content. This is true for all search engines. There might be differences in how search engines weigh page relevancy, but in their approach to Flash, at least for the time being, search engines are really united – they dislike it, yet they index portions of it.

What (Not) to Use Flash For?

Despite the fact that Flash movies are not spider favorites, there are cases when a Flash movie is worth the SEO efforts. But as a general rule, keep Flash movies at a minimum. In this case less is definitely better and search engines are not the only reason. First, Flash movies, especially banners and other kinds of advertisement, distract users and they generally tend to skip them. Second, Flash movies are fat. They consume a lot of bandwidth, and although dialup days are over for the majority of users, a 1 Mbit connection or better is still not the standard one.
Basically, designers should keep to the principle that Flash is good for enhancing a story, but not for telling it – i.e. you have some text with the main points of the story (and the keywords you optimize for) and then you have the Flash movie to add further detail or simply a visual representation of the story. In that connection, the greatest SEO sin is to have the whole site made in Flash! This is simply unforgivable, so do not even dream of high rankings with such a site!
Another “no” is using Flash for navigation. This applies not only to the starting page, where it was once fashionable to splash a gorgeous Flash movie, but to the links between pages as well. Although using images and/or javascript for navigation is the more common mistake, Flash banners and movies must not be used to lead users from one page to another either. Text links are the only SEO-approved way to build site navigation, as the sketch below shows.
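For comparison, a plain text-link menu that any spider can follow is as simple as this sketch (the page names are made up); it can still be styled with CSS to look attractive while the links themselves remain crawlable:
<ul>
  <li><a href="index.html">Home</a></li>
  <li><a href="services.html">Services</a></li>
  <li><a href="contact.html">Contact</a></li>
</ul>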

Workarounds for Optimizing Flash Sites

Although a workaround is not a solution, Flash sites still can be optimized. There are several approaches to this:
·         Input metadata
This is a very important approach, although it is often underestimated and misunderstood. Although metadata is not as important to search engines as it used to be, Flash development tools make it easy to add metadata to your movies, so there is no excuse to leave the metadata fields empty.
·         Provide alternative pages
For a good site it is a must to provide html-only pages that do not force the user to watch the Flash movie. Preparing these pages requires more work, but the reward is worth it, because not only users but search engines as well will see the html-only pages.
·         Flash Search Engine SDK
This is the life-belt – the most advanced tool for extracting text from a Flash movie. One of the handiest applications in the Flash Search Engine SDK is the tool named swf2html. As its name implies, this tool extracts text and links from a Macromedia Flash file and writes the output to a standard HTML document, thus saving you the tedious job of doing it manually.
However, you still need to look at the extracted contents and correct them, if necessary. For example, the order in which the text and links are arranged might need a little restructuring in order to put the keyword-rich content in the title and headings, or at the beginning of the page.
Also, you need to check that there is no duplicate content among the extracted sentences and paragraphs. The font color of the extracted text is another issue. If the font color of the extracted text is the same as the background color, you are entering hidden text territory.

·         SE-Flash.com
Here is a tool that visually shows which parts of your Flash files are visible to search engines and which are not. This tool is very useful even if you already have the Flash Search Engine SDK installed, because it provides one more check of the accuracy of the extracted text. Besides, it is not certain that Google and the other search engines use the Flash Search Engine SDK to get contents from a Flash file, so this tool might give completely different results from those the SDK produces.
These approaches are just some of the most important examples of how to optimize Flash sites. There are many others as well. However, not all of them are clean and clear, and some sit on the boundary of ethical SEO – e.g. creating invisible layers of text that are delivered to spiders instead of the Flash movie itself. Although this technique is not wrong as such – i.e. there is no duplicate or fake content – it is very similar to cloaking and doorway pages, and it is better to avoid it.

 

 

Bad Neighborhood

Has it ever happened to you to have a perfectly optimized site with lots of links and content and the right keyword density, and still not rank high in search engines? Probably every SEO has experienced this. The reasons for this kind of failure can be really diverse – from the sandbox effect (your site just needs time to mature), to overoptimization and inappropriate online relations (i.e. the so-called “bad neighborhood” effect).
While there is not much you can do about the sandbox effect but wait, in most other cases it is up to you to counteract the negative effects you are suffering from. You just need to figure out what is stopping you from achieving the rankings you deserve. Careful analysis of your site and the sites that link to you can give you ideas about where to look for the source of trouble and how to deal with it. If it is overoptimization – remove the excessive stuffing; if it is bad neighbors – say “goodbye” to them. We have already dealt with overoptimization as SEO overkill, and in this article we will have a look at another frequent rankings killer.

Link Wisely, Avoid Bad Neighbors

It is a known fact that one of the most important factors for high rankings, especially with Google, is links. The Web is woven out of links, and inbound and outbound links are perfectly natural. Generally, the more inbound links you have (i.e. other sites linking to you), the better. Having many outbound links, on the other hand, is not that good, and what is worse – it can be disastrous if you link to improper places, i.e. bad neighbors. The concept is hardly difficult to comprehend – it is so similar to real life: if you choose outlaws or bad guys for friends, you are considered to be one of them.
It might look unfair to be penalized for things that you have not done but linking to sites with bad reputation is equal to a crime for search engines and by linking to such a site, you can expect to be penalized as well. And yes, it is fair because search engines do penalize sites that use different tricks to manipulate search results. In a way, in order to guarantee the integrity of search results, search engines cannot afford to tolerate unethical practices.
However, search engines tend to be fair and do not punish you for things that are out of your control. If you have many inbound links from suspicious sites, this will not be regarded as a malpractice on your side because generally it is their Web master, not you, who has put all these links. So, inbound links, no matter where they come from, cannot harm you. But if in addition to inbound links, you have a considerable amount of outbound links to such sites, in a sense you vote for them. Search engines consider this as malpractice and you will get punished.

Why Do Some Sites Get Labelled as Bad Neighbors?

We have already mentioned in this article some of the practices that cause search engines to ban particular sites. But the “sins” are not limited to being a spam domain. Generally, companies get blacklisted because they try to boost their ranking using illicit techniques such as keyword stuffing, duplicate content (or a lack of any original content), hidden text and links, doorway pages, deceptive titles, machine-generated pages, copyright violations, etc. Search engines also tend to dislike meaningless link directories that create the impression of being topically arranged, so if you have a fat links section on your site, double-check what you link to.

Figuring Out Who's Good, Who's Not

The question that probably pops up is: “But since the Web is so vast and so constantly changing, how can I know who is good and who is bad?” Well, you don't have to know each of the sites on the black list, even if that were possible. The black list itself is changing all the time, but it looks like there will always be companies and individuals who are eager to earn some cash by spamming, disseminating viruses and porn, or simply performing fraudulent activities.
The first check you need to perform, when you suspect that some of the sites you are linking to are bad neighbors, is to see whether they are included in the indices of Google and the other search engines. Type “site:siteX.com”, where “siteX.com” is the site you are checking, and see if Google returns any results from it. If it does not return any results, chances are that this site is banned from Google and you should immediately remove any outbound links to siteX.com.
If you have outbound links to many different sites, such checks might take a lot of time. Fortunately, there are tools that can help you in performing this task. The CEO of Blackwood Productions has recommended http://www.bad-neighborhood.com/ as one of the reliable tools that reports links to and from suspicious sites and sites that are missing in Google's index.

What is Robots.txt

Robots.txt

It is great when search engines frequently visit your site and index your content, but often there are cases when indexing parts of your online content is not what you want. For instance, if you have two versions of a page (one for viewing in the browser and one for printing), you'd rather have the printing version excluded from crawling; otherwise you risk a duplicate content penalty. Also, if you happen to have sensitive data on your site that you do not want the world to see, you will prefer that search engines do not index these pages (although in this case the only sure way of not indexing sensitive data is to keep it offline on a separate machine). Additionally, if you want to save some bandwidth by excluding images, stylesheets and javascript from indexing, you also need a way to tell spiders to keep away from these items.
One way to tell search engines which files and folders on your Web site to avoid is with the Robots metatag. But since not all search engines read metatags, the Robots metatag can simply go unnoticed. A better way to inform search engines about your wishes is to use a robots.txt file.
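For reference, the Robots metatag mentioned above goes into the head section of an individual page and looks roughly like this (whether a given engine honors it is up to that engine):
<META NAME="Robots" CONTENT="noindex, nofollow" />
Here noindex asks the engine not to index the page and nofollow asks it not to follow the links on it; robots.txt, described next, works at the level of whole files and folders for the entire site.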

What Is Robots.txt?

Robots.txt is a text (not html) file you put on your site to tell search robots which pages you would like them not to visit. Robots.txt is by no means mandatory for search engines, but generally search engines obey what they are asked not to do. It is important to clarify that robots.txt is not a way of preventing search engines from crawling your site (i.e. it is not a firewall, or a kind of password protection); putting up a robots.txt file is something like posting a note “Please, do not enter” on an unlocked door – you cannot prevent thieves from coming in, but the good guys will not open the door and enter. That is why we say that if you have really sensitive data, it is too naïve to rely on robots.txt to protect it from being indexed and displayed in search results.
The location of robots.txt is very important. It must be in the main directory because otherwise user agents (search engines) will not be able to find it – they do not search the whole site for a file named robots.txt. Instead, they look first in the main directory (i.e. http://mydomain.com/robots.txt) and if they don't find it there, they simply assume that this site does not have a robots.txt file and therefore they index everything they find along the way. So, if you don't put robots.txt in the right place, do not be surprised that search engines index your whole site.
The concept and structure of robots.txt was developed more than a decade ago, and if you are interested in learning more about it, visit http://www.robotstxt.org/ or go straight to the Standard for Robot Exclusion, because in this article we will deal only with the most important aspects of a robots.txt file. Next we will continue with the structure of a robots.txt file.

Structure of a Robots.txt File

The structure of a robots.txt file is pretty simple (and barely flexible) – it is simply a list of user agents and disallowed files and directories. Basically, the syntax is as follows:
User-agent:
Disallow:
“User-agent:” names the search engine crawler a record applies to, and “Disallow:” lists the files and directories to be excluded from indexing. In addition to the “User-agent:” and “Disallow:” entries, you can include comment lines – just put the # sign at the beginning of the line:
# All user agents are disallowed to see the /temp directory.
User-agent: *
Disallow: /temp/
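Building on the same syntax, a slightly fuller (and purely hypothetical) robots.txt for the situations described at the beginning of this article – print versions, images, scripts – could look like this:
# Keep all crawlers away from the printable copies and the supporting files.
User-agent: *
Disallow: /print/
Disallow: /images/
Disallow: /scripts/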

The Traps of a Robots.txt File

When you start making complicated files – i.e. you decide to allow different user agents access to different directories – problems can start, if you do not pay special attention to the traps of a robots.txt file. Common mistakes include typos and contradicting directives. Typos are misspelled user-agents, directories, missing colons after User-agent and Disallow, etc. Typos can be tricky to find but in some cases validation tools help.
The more serious problem is with logical errors. For instance:
User-agent: *
Disallow: /temp/
User-agent: Googlebot
Disallow: /images/
Disallow: /temp/
Disallow: /cgi-bin/
The above robots.txt contains a general record that keeps all agents out of the /temp/ directory, plus a more restrictive record just for Googlebot. The trap is that the records do not add up: each crawler obeys only the single record addressed to it (standards-compliant robots such as Googlebot pick the record that matches them most specifically, while some simpler robots just take the first match), so any rule you want a specifically named crawler to follow must be repeated inside that crawler's own record, as /temp/ is repeated here. If you assume the general record is “inherited” and leave such a rule out, the crawler will happily fetch what you thought you had blocked. You see, the structure of a robots.txt file is simple, yet serious mistakes can easily be made.
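To see the trap from the other side, here is a hypothetical variant written by someone who assumed the general rules would be inherited by the Googlebot record:
User-agent: *
Disallow: /temp/

# Googlebot obeys only the record below, so /temp/ is not blocked for it.
User-agent: Googlebot
Disallow: /images/
Disallow: /cgi-bin/
Because Disallow: /temp/ is missing from the Googlebot record, Googlebot is free to crawl /temp/, even though the intention was to keep every crawler out of it.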

Tools to Generate and Validate a Robots.txt File

Having in mind the simple syntax of a robots.txt file, you can always read it to see if everything is OK but it is much easier to use a validator, like this one: http://tool.motoricerca.info/robots-checker.phtml. These tools report about common mistakes like missing slashes or colons, which if not detected compromise your efforts. For instance, if you have typed:
User agent: *
Disallow: /temp/
this is wrong because the hyphen between “User” and “agent” is missing and the syntax is incorrect.
In those cases when you have a complex robots.txt file – i.e. you give different instructions to different user agents, or you have a long list of directories and subdirectories to exclude – writing the file manually can be a real pain. But do not worry – there are tools that will generate the file for you. What is more, there are visual tools that allow you to point and select which files and folders are to be excluded. But even if you do not feel like buying a graphical tool for robots.txt generation, there are online tools to assist you. For instance, the Server-Side Robots Generator offers a dropdown list of user agents and a text box for you to list the files you don't want indexed. Honestly, it is not much help unless you want to set specific rules for different search engines, because in any case it is up to you to type the list of directories, but it is better than nothing.

Jumping Over the Google Sandbox

It's never easy for newcomers to enter a market, and there are barriers of different kinds. For newcomers to the world of search engines, the barrier is called the sandbox – your site stays there until it is deemed mature enough to be allowed into the Top Positions club. Although there is no official confirmation that the sandbox exists, Google employees have implied it, and SEO experts have seen in practice that new sites, no matter how well optimized, don't rank high on Google, while on MSN and Yahoo! they catch on quickly. For Google, the time spent in the sandbox for new sites with new domains averages about 6 months, although it can vary from less than a month to over 8 months.

Sandbox and Aging Delay

While it might be considered unfair to hold new sites back by artificial means like keeping them at the bottom of search results, there is a fair amount of reasoning behind why search engines, and above all Google, have resorted to such measures. With blackhat practices like bulk buying of links, creation of duplicate content, or simply keyword stuffing to get to the coveted top, it is no surprise that Google chose to penalize new sites which overnight get tons of backlinks, or which are used as a source of backlinks to support an older site (possibly owned by the same company). Needless to say, when such fake sites are indexed and admitted to top positions, this deteriorates search results, so Google had to take measures to ensure that such practices will not be tolerated. The sandbox effect works like a probation period for new sites, and by making the farming of fake sites a long-term rather than a short-term payoff for site owners, it is supposed to decrease their use.
Sandbox and aging delay are similar in meaning and many SEO experts use them interchangeably. Aging delay is more self-explanatory – sites are “delayed” till they come of age. Unlike in legislation, though, with search engines this age is not defined and it differs from case to case. There are cases when several sites were launched on the same day and were indexed within a week of each other, but the aging delay for each of them expired in different months. As you see, the sandbox is something beyond your control and you cannot avoid it, but there are still steps you can take to minimize the damage for new sites with new domains.

Minimizing Sandbox Damages

While the Google sandbox is not something you can control, there are certain steps you can take in order to make the sandbox effect less destructive for your new site. As with many aspects of SEO, there are ethical and unethical tips and tricks, and unethical tricks can get you additional penalties or a complete ban from Google, so think twice before resorting to them. The unethical approaches will not be discussed in this article because they don't comply with our policy.
Before we delve into more detail about particular techniques to minimize sandbox damage, it is necessary to clarify the general rule: you cannot fight the sandbox. The only thing you can do is adapt to it and patiently wait for time to pass. Any attempts to fool Google – from writing melodramatic letters to Google to using “sandbox tools” to bypass the filter – can only make your situation worse. There are still many initiatives you can take while in the sandbox, for example:
·         Actively gather content and good links – as time passes by, relevant and fresh content and good links will take you to the top. When getting links, have in mind that they need to be from trusted sources – like DMOZ, CNN, Fortune 500 sites, or other reputable places. Also, links from .edu, .gov, and .mil domains might help because these domains are usually exempt from the sandbox filter. Don't get 500 links a month – this will kill your site! Instead, build links slowly and steadily.
·         Plan ahead – contrary to the general practice of launching a site when it is absolutely complete, launch a couple of pages when you have them. This will start the clock, and time will run parallel to your site development efforts.
·         Buy old or expired domains – the sandbox effect is more serious for new sites on new domains, so if you buy old or expired domains and launch your new site there, you'll experience fewer problems.
·         Host on a well-established host – another solution is to host your new site on a subdomain of a well-established host (however, free hosts are generally not a good idea in terms of SEO ranking). The sandbox effect is not so severe for new subdomains (unless the domain itself is blacklisted). You can also host the main site on a subdomain and host just some content on a separate domain, linked to the main site. You can also use redirects from the subdomained site to the new one, although the effect of this practice is questionable because it can be viewed as an attempt to fool Google.
·         Concentrate on less popular keywords – the fact that your site is sandboxed does not mean that it is not indexed by Google at all. On the contrary, you could be able to top the search results from the very beginning! Looking like a contradiction with the rest of the article? Not at all! You could top the results for less popular keywords – sure, it is better than nothing. And while you wait to get to the top for the most lucrative keywords, you can discover that even less popular keywords are enough to keep the ball rolling, so you may want to make some optimization for them.
·         Rely more on non-Google ways to increase traffic – it is worth remembering that Google is not the only search engine or marketing tool out there. So if you plan your SEO efforts to include other search engines, which either have no sandbox at all or a relatively short stay in it, this will also minimize the damages of the sandbox effect.

Optimizing for Yahoo!

Back in the dawn of the Internet, Yahoo! was the most popular search engine. When Google arrived, its indisputably precise search results made it the preferred search engine. However, Google is not the only search engine, and it is estimated that about 20-25% of searches are conducted on Yahoo!. Another major player on the market is MSN, which means that SEO professionals cannot afford to optimize only for Google but need to take into account the specifics of the other two engines (Yahoo! and MSN) as well.
Optimizing for three search engines at the same time is not an easy task. There were times when the SEO community was inclined to think that the Yahoo! algorithm was deliberately made just the opposite of the Google algorithm, because pages that ranked high in Google did not do so well in Yahoo! and vice versa. The attempt to optimize a site to appeal to both search engines usually led to being kicked out of the top of both of them.
Although there is no doubt that the algorithms of the two search engines are different, it is not possible to say for certain what exactly differs: both are constantly changing, neither is made publicly available by its authors, and the details about how each algorithm functions are obtained by speculation based on trial-and-error tests for particular keywords. What is more, having in mind the frequency with which algorithms are changed, it is not possible to react to every slight change, even if the algorithms' details were known officially. But knowing some basic differences between the two does help to get better ranking. The Yahoo vs Google tool gives a nice visual representation of the differences in positioning between Yahoo! and Google.

The Yahoo! Algorithm - Differences With Google

Like all search engines, Yahoo! spiders the pages on the Web, indexes them in its database and later performs various mathematical operations to produce the pages with the search results. Yahoo! Slurp (the Yahoo! spider) is the second most active crawler on the Web. Yahoo! Slurp is not different from the other bots, and if your page misses important elements of the SEO mix and is not spiderable, then it hardly matters which algorithm will be used because you will never get to a top position. (You may want to try the Search Engine Spider Simulator and check how much of your pages is spiderable.)
Yahoo! Slurp might be even more active than Googlebot, because occasionally there are more pages in the Yahoo! index than in Google's. Another alleged difference between Yahoo! and Google is the sandbox (putting sites “on hold” for some time before they appear in search results). Google's sandbox is deeper, so if you have made recent changes to your site, you might have to wait a month or two (shorter for Yahoo! and longer for Google) till these changes are reflected in the search results.
With new major changes in the Google algorithm under way (the so-called “BigDaddy” Infrastructure expected to be fully launched in March-April 2006) it's hard to tell if the same SEO tactics will be hot on Google in two months' time. One of the supposed changes is the decrease in weight of links. If this happens, a major difference between Yahoo! and Google will be eliminated because as of today Google places more importance on factors such as backlinks, while Yahoo! sticks more to onpage factors, like keyword density in the title, the URL, and the headings.
Of all the differences between Yahoo! and Google, the way keywords in the title and in the URL are treated is the most important. If you have the keyword in these two places, then you can expect a top 10 place in Yahoo!. But beware – a title and an URL cannot be unlimited and technically you can place no more than 3 or 4 keywords there. Also, it matters if the keyword in the title and in the URL is in a basic form or if it is a derivative – e.g. when searching for “cat”, URLs with “catwalk” will also be displayed in Yahoo! but most likely in the second 100 results, while URLs with “cat” only are quite near to the top.
Since Yahoo! is first a directory for submissions and then a search engine (with Google it's just the opposite), a site, which has the keyword in the category it is listed under, stands a better chance to be in the beginning of the search results. With Google this is not that important. For Yahoo! keywords in filenames also score well, while for Google this is not a factor of exceptional importance.
Another major difference is keyword density. The higher the density, the higher the positioning with Yahoo!. But beware – some of the keyword-rich sites that do well on Yahoo! can easily fall into the keyword-stuffed category for Google, so if you attempt to score well on Yahoo! (with keyword density above 7-8%), you risk being banned by Google!

Yahoo! WebRank

Following Google's example, Yahoo! introduced a Web toolbar that collects anonymous statistics about which sites users browse, thus getting an aggregated value (from 0 to 10) of how popular a given site is. The higher the value, the more popular the site is and the more valuable the backlinks from it are.
Although WebRank and positioning in the search results are not directly correlated, there is a dependency between them – sites with high WebRank tend to position higher than comparable sites with lower WebRank and the WebRanks of the top 20-30 results for a given keyword are most often above 5.00 on average.
The practical value of WebRank as a measure of success is often discussed in SEO communities, and the general opinion is that it is not the most relevant metric. However, one of the benefits of WebRank is that it alerts Yahoo! Slurp that a new page has appeared, thus inviting it to spider the page if it is not already in the Yahoo! Search index.
When the Yahoo! toolbar was launched in 2004, it had an icon that showed the WebRank of the page currently open in the browser. This feature was later removed, but there are still tools on the Web that allow you to check the WebRank of a particular page. For instance, this tool allows you to check the WebRanks of a whole bunch of pages at a time.

See Your Site With the Eyes of a Spider

Making efforts to optimize a site is great, but what counts is how search engines see your efforts. While even the most careful optimization does not guarantee top positions in search results, if your site does not follow basic SEO truths, then it is more than certain that this site will not score well with search engines. One way to check in advance how your SEO efforts are seen by search engines is to use a search engine simulator.

Spiders Explained

Basically, all search engine spiders function on the same principle – they crawl the Web and index pages, which are stored in a database and later run through various algorithms to determine page ranking, relevancy, etc. While the algorithms for calculating ranking and relevancy differ widely among search engines, the way they index sites is more or less uniform, and it is very important that you know what spiders are interested in and what they neglect.
Search engine spiders are robots and they do not read your pages the way a human does. Instead, they tend to see only particular items and are blind to many extras (Flash, JavaScript) that are intended for humans. Since spiders determine whether humans will find your site, it is worth considering what spiders like and what they don't.

Flash, JavaScript, Image Text or Frames?!

Flash, JavaScript and image text are NOT visible to search engines, and frames are a real disaster in terms of SEO ranking. All of them might be great in terms of design and usability, but for search engines they are absolutely wrong. An incredible mistake one can make is to have a Flash intro page (frames or no frames, this will hardly make the situation worse) with the keywords buried in the animation. Check a page with Flash and images (and preferably no text or inbound or outbound hyperlinks) with the Search Engine Spider Simulator tool and you will see that to search engines this page appears almost blank.
Running your site through this simulator will show you more than the fact that Flash and JavaScript are not SEO favorites. In a way, spiders are like text browsers and they don't see anything that is not a piece of text. So having an image with text in it means nothing to a spider and it will ignore it. A workaround (recommended as an SEO best practice) is to include a meaningful description of the image in the ALT attribute of the <IMG> tag, but be careful not to use too many keywords in it because you risk penalties for keyword stuffing. The ALT attribute is especially essential when you use images rather than text for links. You can also use ALT text to describe what a Flash movie is about but, again, be careful not to cross the line between optimization and over-optimization.
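As a simple illustration (the file and page names here are made up), a spider sees nothing useful in the first line but can read the description in the second:

<img src="kitten-photo.jpg">
<a href="kitten-care.html"><img src="kitten-photo.jpg" alt="How to care for an orphaned kitten"></a>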

Are Your Hyperlinks Spiderable?

The search engine spider simulator can be of great help when trying to figure out if your hyperlinks lead to the right place. For instance, link exchange websites often put fake links to your site with JavaScript (using mouseover events and similar tricks to make the link look genuine), but this is actually not a link that search engines will see and follow. Since the spider simulator would not display such links, you'll know that something about the link is wrong.
If you use JavaScript-based menus, it is highly recommended to also use the <noscript> tag. The reason is that JavaScript-based menus are not spiderable and all the links in them will be ignored; the solution is to repeat the menu item links inside a <noscript> tag. The <noscript> tag can hold a lot, but please avoid using it for link stuffing or any other kind of SEO manipulation.
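A minimal sketch of this idea (the script file and page names are hypothetical) – the JavaScript menu serves visitors, while the <noscript> block gives spiders plain links to follow:

<script type="text/javascript" src="menu.js"></script>
<noscript>
<a href="products.html">Products</a>
<a href="services.html">Services</a>
<a href="contact.html">Contact</a>
</noscript>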
If you happen to have tons of hyperlinks on your pages (although it is highly recommended to have fewer than 100 hyperlinks per page), you might have a hard time checking that they are all OK. For instance, if you have pages that return “403 Forbidden”, “404 Page Not Found” or similar errors that prevent the spider from accessing the page, then it is certain that the page will not be indexed. It is worth mentioning that a spider simulator does not deal with 403 and 404 errors, because it checks where links lead to, not whether the target of the link is in place, so you need to use other tools to check that the targets of your hyperlinks are the intended ones.

Looking for Your Keywords

While there are specific tools, like the Keyword Playground or the Website Keyword Suggestions, which deal with keywords in more detail, search engine spider simulators also help you see, with the eyes of a spider, where keywords are located in the text of the page. Why is this important? Because keywords in the first paragraphs of a page weigh more than keywords in the middle or at the end. And even if keywords visually appear to us to be at the top, this may not be the way spiders see them. Consider a standard Web page built with tables. In this case the code that describes the page layout (like navigation links or separate cells with text that are the same sitewide) can come first in the HTML and, what is worse, can be so long that the actual page-specific content ends up screens away from the top of the page. When we look at the page in a browser, everything seems fine – the page-specific content is on top – but since in the HTML code it is just the opposite, the page will not be seen as keyword-rich.

Are Dynamic Pages Too Dynamic to be Seen At All

Dynamic pages (especially ones with question marks in the URL) are also an extra that spiders do not love, although many search engines do index dynamic pages as well. Running the spider simulator will give you an idea of how well your dynamic pages are accepted by search engines. Useful suggestions on how to deal with search engines and dynamic URLs can be found in the Dynamic URLs vs. Static URLs article.

Meta Keywords and Meta Description

Meta keywords and the meta description, as the names imply, are to be found in the <META> tags of an HTML page. Meta keywords and meta descriptions were once among the most important criteria for determining the relevance of a page, but now search engines employ alternative mechanisms for determining relevancy, so you can safely skip listing keywords and a description in meta tags (unless you want to add instructions there for the spider about what to index and what not to; apart from that, meta tags are not very useful anymore).
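For reference, this is what such tags look like in the <HEAD> of a page; the keywords and description values here are only placeholders, and the robots tag is the kind of spider instruction mentioned above:

<meta name="keywords" content="orphaned kittens, kitten care, cat rescue">
<meta name="description" content="Practical tips for rescuing and caring for orphaned kittens.">
<meta name="robots" content="noindex, nofollow">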

Optimization, Over-Optimization or SEO Overkill?

The fight to top search engines' results knows no limits – neither ethical, nor technical. There are often reports of sites that have been temporarily or permanently excluded from Google and the other search engines because of malpractice and the use of “black hat” SEO techniques. The reaction of the search engines is easy to understand – with so many tricks and cheats that SEO experts include in their arsenal, the relevancy of returned results is seriously compromised, to the point where search engines start to deliver completely irrelevant and manipulated search results. And even if search engines do not discover your scams right away, your competitors might report you.

Keyword Density or Keyword Stuffing?

Sometimes SEO experts go too far in their desire to push their clients' sites to top positions and resort to questionable practices, like keyword stuffing. Keyword stuffing is considered an unethical practice because what you actually do is use the keyword in question throughout the text suspiciously often. Having in mind that the recommended keyword density is from 3 to 7%, anything above this – say 10% density – starts to look very much like keyword stuffing, and it is unlikely to go unnoticed by search engines. A text with 10% keyword density can hardly make sense when read by a human. Some time ago Google implemented the so-called “Florida Update”, which essentially imposed a penalty on pages that are keyword-stuffed and over-optimized in general.
Generally, keyword density in the title, the headings, and the first paragraphs matters more. Needless to say, you should be especially careful not to stuff these areas. Try the Keyword Density Cloud tool to check if your keyword density is within the acceptable limits, especially in the above-mentioned places. If you have a high density percentage for a frequently used keyword, then consider replacing some of the occurrences of the keyword with synonyms. Also, words that are in bold and/or italic are generally considered important by search engines, but if every occurrence of the target keywords is in bold and italic, this also looks unnatural and in the best case it will not push your page up.
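Keyword density itself is just simple arithmetic – the number of times the keyword appears, divided by the total number of words on the page, times 100. The rough PHP sketch below shows the idea; the sample text and keyword are made up, and real tools usually also count keyword phrases and strip markup more carefully:

<?php
// Rough keyword density: exact-match occurrences of a single keyword
// divided by the total word count of the page text, as a percentage.
function keyword_density($text, $keyword) {
    $words = str_word_count(strtolower(strip_tags($text)), 1);
    $total = count($words);
    if ($total == 0) {
        return 0;
    }
    $hits = count(array_keys($words, strtolower($keyword)));
    return round(($hits / $total) * 100, 2);
}

// Example: 2 occurrences of "cat" in 9 words is roughly 22% - far too high.
echo keyword_density("Cats are great. Adopt a cat today, cat lovers!", "cat") . "%\n";
?>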

Doorway Pages and Hidden Text

Another common keyword scam is doorway pages. Before Google introduced the PageRank algorithm, doorways were a common practice and there were times when they were not even considered an illegitimate optimization technique. A doorway page is a page that is made especially for the search engines, has no meaning for humans, and is used to get high positions in search engines and to trick users into coming to the site. Although keywords are still very important, today keywords alone have less effect in determining the position of a site in search results, so doorway pages no longer bring much traffic to a site – but if you use them, don't ask why Google punished you.
Very similar to doorway pages is a scam called hidden text. This is text which is invisible to humans (e.g. the text color is the same as the page background) but is included in the HTML source of the page, trying to fool search engines into thinking that the particular page is keyword-rich. Needless to say, neither doorway pages nor hidden text can be qualified as optimization techniques – they are more manipulation than anything else.
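Purely as an illustration of what to avoid (not a technique to copy), hidden text typically looks something like this in the HTML source – keywords in white text on a white background that a visitor never sees:

<body bgcolor="#ffffff">
<font color="#ffffff">cats kittens cat food kitten care cats kittens</font>
</body>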

Duplicate Content

It is a basic SEO rule that content is king – but not duplicate content. For Google, duplicate content means text that is the same as the text on a different page on the SAME site (or on a sister site, or on a site that is heavily linked to the site in question, where it can be presumed that the two sites are related) – i.e. when you copy and paste the same paragraphs from one page of your site to another, you might expect to see your site's rank drop. Most SEO experts believe that syndicated content is not treated as duplicate content, and there are many examples of this; if syndicated content were duplicate content, the sites of news agencies would have been the first to drop out of the search results. Still, it does not hurt to check from time to time whether your site shares duplicate content with another, if only because somebody might be illegally copying your content without your knowledge. The guys from Blackwood Productions have told me about cases like this, so as incredible as it might seem, content theft is not that uncommon. The Similar Page Checker tool will help you see if you have grounds to worry about duplicate content.

Links Spam

Links are another major SEO tool and, like the other SEO tools, they can be used or misused. While backlinks are certainly important (for Yahoo! the quantity of backlinks matters more, while for Google it is more important which sites the backlinks come from), getting tons of backlinks from a link farm or a blacklisted site is begging to be penalized. Also, if outbound links (links from your site to other sites) considerably outnumber your inbound links (links from other sites to your site), then you have put too much effort into creating useless links, because this will not improve your ranking. You can use the Domain Stats Tool to see the number of backlinks (inbound links) to your site and the Site Link Analyzer to see how many outbound links you have.
Using keywords in links (the anchor text), domain names, and folder and file names does boost your search engine rankings, but again, precise measure is the boundary between topping the search results and being kicked out of them. For instance, suppose you are optimizing for the keyword “cat”, which is a frequently chosen keyword, and, as with all popular keywords and phrases, competition is fierce. You might see no alternative for reaching the top but to get a domain name like http://www.cat-cats-kittens-kitty.com, which is no doubt packed with keywords to the maximum, but it is, first, difficult to remember, and second, if the content does not correspond to the plenitude of cats in the domain name, you will never top the search results.
Although file and folder names are less important than domain names, now and then (but definitely not all the time) you can include “cat” (and synonyms) in them and in the anchor text of your links. This counts well, provided that the anchors are not artificially stuffed (for instance, if you use “cat_cats_kitten” as the anchor for internal site links, that anchor certainly is stuffed). While you have no control over third parties that link to you and use anchors that you don't like, it is up to you to perform periodic checks of what anchor text other sites use to link to you. A handy tool for this task is the Backlink Anchor Text Analysis, where you enter the URL and get a listing of the sites that link to you and the anchor text they use.
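As a rough illustration (the URL and page are made up), the first link below reads naturally and still carries the keyword, while the second is the kind of stuffed anchor described above:

<a href="http://www.example.com/cat-care.html">tips for caring for orphaned cats and kittens</a>
<a href="http://www.example.com/cat-care.html">cat_cats_kitten</a>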
Finally, to Google and the other search engines it makes no difference whether a site is intentionally over-optimized to cheat them or the over-optimization is the result of good intentions, so no matter what your motives are, always keep to reasonable practices and remember not to overstep the line.

Ranking in Country Specific Search Engines

In the world of Search Engine Optimization, location is important. Search engines like to bring relevant results to a user, not only in the area of keywords and sites that give the user exactly what they are looking for, but in the correct language as well. It doesn't do a lot of good for a Russian-speaking individual to continually get websites returned in a search query that are written in Arabic or Chinese. So a search engine has to have some way to return the results the user is looking for in the right language, and a search engine's goal is also to try to get the user as close to home as possible in the realm of their search results.
Many people wonder why their websites don't rank well in some search engines, especially if they are trying to get ranked in a search engine based in another country. Perhaps they may not even know their site is in another country. You say that is impossible: how could one not know what country their site is in? It might surprise that individual to find that their website might in fact be hosted in a completely different country, perhaps even on another continent!
Consider that many search engines, including Google, will determine country not only based on the domain name (like .co.uk or .com.au), but also the country of a website's physical location based upon IP address. Search engines are programmed with information that tells them which IP addresses belong to which particular country, as well as which domain suffixes are assigned to which countries.
Let's say, for instance, that you are wishing to rank highly in Google based in the United States. It would not do well, then, for you to have your website hosted in Japan or Australia. You might have to switch your web host to one whose servers reside in the United States.
There is a tool we like to use called the Website to Country Tool. What this tool does is allow you to view which country your website is hosted in. Not only will this tell you what country your site is hosted in, but it can also help you determine a possible reason why your website may not be ranking as highly as you might like in a particular search engine.
It might be disheartening to learn that your website has been hosted in another country, but it is better to understand why your site might not be ranking as highly as you'd like it to be, especially when there is something you can definitely do about it.

The Age of a Domain Name

One of the many factors in Google's search engine algorithm is the age of a domain name. In a small way, the age of a domain gives the appearance of longevity and therefore a higher relevancy score in Google.
Driven by spam sites which pop up and die off quickly, the age of a domain is usually a sign of whether a site is yesterday's news or tomorrow's popular site. We see this in the world of business, for example. While the novelty that goes with a new store in town brings a short burst of initial business, people tend to trust a business that has been around for a long time over one that is brand new. The same is true for websites. Or, as Rob from BlackwoodProductions.com says, "Rent the store (i.e. register the domain) before you open for business".
Two things that are considered in the age of a domain name are:
  • The age of the website
  • The length of time a domain has been registered
The age of the website is made up of how long the content has actually been on the web, how long the site has been in promotion, and even the last time content was updated. The length of time a domain has been registered is measured not only by the actual date the domain was registered, but also by how long it is registered for. Some domains are only registered for a year at a time, while others are registered for two, five, or even ten years.
In the latest Google update, which SEOs call the Jagger Update, some of the big changes seen were the importance given to age: the age of incoming links, the age of web content, and the date the domain was registered. There were many things, in reality, that were changed in this last update, but since we're talking about the age of a domain, we'll only deal with those issues specifically. We'll talk more in other articles about other factors you will want to be aware of that Google changed in their evaluation criteria of websites on the Internet.
One of the ways Google uses to minimize search engine spam is giving new websites a waiting period of three to four months before assigning them any kind of PageRank. This is referred to as the "sandbox effect". It's called the "sandbox effect" because it has been said that Google wants to see if those sites are serious about staying around on the web. The sandbox analogy comes from the idea that Google throws all of the new sites into a sandbox and lets them play together, away from all the adults. Then, when those new sites "grow up", so to speak, they are allowed to be categorized with the "adults" – the websites that aren't considered new.
What does this mean to you? For those of you with new websites, you may be disappointed in this news, but don't worry. There are some things you can do while waiting for the sandbox period to expire, such as concentrating on your backlink strategies, promoting your site through Pay-per-click, articles, RSS feeds, or in other ways. Many times, if you spend this sandbox period wisely, you'll be ready for Google when it does finally assign you a PageRank, and you could find yourself starting out with a great PageRank!
Even though the domain's age is a factor, critics believe it only gets a little weight in the algorithm. Since the age of your domain is something you have no control over, it doesn't necessarily mean that your site isn't going to rank well in the Search Engine Results Pages (SERPs). It does mean, however, that you will have to work harder in order to build up your site's popularity and concentrate on factors that you can control, like inbound links and the type of content you present on your website.
So what happens if you change your domain name? Does this mean you're going to get a low grade with a search engine if you have a new site? No, not necessarily. There are a few things you can do to help ensure that your site won't get lost in the SERPs because of the age of the domain.
1. Make sure you register your domain name for the longest amount of time possible. Many registrars allow you to register a domain name for as long as five years, and some even longer. Registering your domain for a longer period of time gives an indication that your site intends to be around for a long time, and isn't going to just disappear after a few months. This will help boost your score with regards to your domain's age.
2. Consider registering a domain name even before you are sure you're going to need it. We see many domains out there that, even while they are registered, don't have a website to go with them. This could mean that the site is in development, or simply that someone saw the use of that particular domain name and wanted to snatch it up before someone else did. There doesn't seem to be any problem with this method so far, so it certainly can't hurt you to buy a domain name you think could be catchy, even if you end up just selling it later on.
3. Think about purchasing a domain name that was already pre-owned. Not only will this allow you to avoid the "sandbox effect" of a new website in Google, but it also allows you to keep whatever PageRank may have already been attributed to the domain. Be aware that most pre-owned domains with PageRank aren't as cheaply had as a new domain, but it might be well worth it to you to invest a bit more money right at the start.
4. Keep track of your domain's age. One of the ways you can determine the age of a domain is with the handy Domain Age Tool. What it does is allow you to view the approximate age of a website on the Internet, which can be very helpful in determining what kind of edge your competitors might have over you, and even what a site might have looked like when it first started.
To use it, simply type in the URL of your domain and the URLs of your competitors, and click submit. This will give you the age of the domains and other interesting information, like anything that had been cached from the site initially. This could be especially helpful if you are purchasing a pre-owned domain.
Because trustworthy sites are going to be the wave of the future, factoring in the age of a domain is a good idea. Even though a site that has been around for years may suddenly go belly-up, or the next big eBay or Yahoo! just might be getting its start, age alone is not a full measure of how trustworthy a site is or will be. This is why many other factors weigh into a search engine's algorithm and not just a single factor alone. What we do know is that we've seen age become more important than it had been previously, and there are only good things to be said about having a site that's been around for a while.

The Importance of Backlinks

If you've read anything about or studied Search Engine Optimization, you've come across the term "backlink" at least once. For those of you new to SEO, you may be wondering what a backlink is, and why they are important. Backlinks have become so important to the scope of Search Engine Optimization, that they have become some of the main building blocks to good SEO. In this article, we will explain to you what a backlink is, why they are important, and what you can do to help gain them while avoiding getting into trouble with the Search Engines.
What are "backlinks"? Backlinks are links that are directed towards your website. Also knows as Inbound links (IBL's). The number of backlinks is an indication of the popularity or importance of that website. Backlinks are important for SEO because some search engines, especially Google, will give more credit to websites that have a good number of quality backlinks, and consider those websites more relevant than others in their results pages for a search query.
When search engines calculate the relevance of a site to a keyword, they consider the number of QUALITY inbound links to that site. So we should not be satisfied with merely getting inbound links; it is the quality of the inbound link that matters.
A search engine considers the content of the sites to determine the QUALITY of a link. When inbound links to your site come from other sites, and those sites have content related to your site, these inbound links are considered more relevant to your site. If inbound links are found on sites with unrelated content, they are considered less relevant. The higher the relevance of inbound links, the greater their quality.
For example, if a webmaster has a website about how to rescue orphaned kittens, and received a backlink from another website about kittens, then that would be more relevant in a search engine's assessment than say a link from a site about car racing. The more relevant the site is that is linking back to your website, the better the quality of the backlink.
Search engines want websites to have a level playing field, and look for natural links built slowly over time. While it is fairly easy to manipulate links on a web page to try to achieve a higher ranking, it is a lot harder to influence a search engine with external backlinks from other websites. This is also a reason why backlinks factor in so highly into a search engine's algorithm. Lately, however, a search engine's criteria for quality inbound links has gotten even tougher, thanks to unscrupulous webmasters trying to achieve these inbound links by deceptive or sneaky techniques, such as with hidden links, or automatically generated pages whose sole purpose is to provide inbound links to websites. These pages are called link farms, and they are not only disregarded by search engines, but linking to a link farm could get your site banned entirely.
Another reason to achieve quality backlinks is to entice visitors to come to your website. You can't build a website, and then expect that people will find your website without pointing the way. You will probably have to get the word out there about your site. One way webmasters got the word out used to be through reciprocal linking. Let's talk about reciprocal linking for a moment.
There is much discussion in these last few months about reciprocal linking. In the last Google update, reciprocal links were one of the targets of the search engine's latest filter. Many webmasters had agreed upon reciprocal link exchanges in order to boost their sites' rankings with the sheer number of inbound links. In a link exchange, one webmaster places a link on his website that points to another webmaster's website, and vice versa. Many of these links were simply not relevant, and were just discounted. So while the irrelevant inbound link was ignored, the outbound links still got counted, diluting the relevancy score of many websites. This caused a great many websites to drop off the Google map.
We must be careful with our reciprocal links. There is a Google patent in the works that will deal not only with the popularity of the sites being linked to, but also with how trustworthy a site is that you link to from your own website. This will mean that you could get into trouble with the search engine just for linking to a bad apple. We can begin preparing for this future change in the search engine algorithm by being choosier right now about which sites we exchange links with. By choosing only relevant sites to link with – sites that don't have tons of outbound links on a page and that don't practice black-hat SEO techniques – we will have a better chance that our reciprocal links won't be discounted.
Many webmasters have more than one website. Sometimes these websites are related, sometimes they are not. You also have to be careful about interlinking multiple websites on the same IP. If you own seven related websites, then a link to each of those websites on a page could hurt you, as it may look to a search engine like you are trying to do something fishy. Many webmasters have tried to manipulate backlinks in this way, and too many links to sites with the same IP address is referred to as backlink bombing.
One thing is certain: interlinking sites doesn't help you from a search engine standpoint. The only reason you may want to interlink your sites in the first place might be to provide your visitors with extra resources to visit. In this case, it would probably be okay to provide visitors with a link to another of your websites, but try to keep many instances of linking to the same IP address to a bare minimum. One or two links on a page here and there probably won't hurt you.
There are a few things to consider when beginning your backlink building campaign. It is helpful to keep track of your backlinks, to know which sites are linking back to you, and how the anchor text of each backlink incorporates keywords relating to your site. A tool to help you keep track of your backlinks is the Domain Stats Tool. This tool displays the backlinks of a domain in Google, Yahoo, and MSN. It will also tell you a few other details about your website, like your listings in the Open Directory (DMOZ), backlinks from which Google regards as highly important; your Alexa traffic rank; and how many pages from your site have been indexed, to name just a few.
Another tool to help you with your link building campaign is the Backlink Builder Tool. It is not enough just to have a large number of inbound links pointing to your site. Rather, you need to have a large number of QUALITY inbound links. This tool searches for websites that have a related theme to your website which are likely to add your link to their website. You specify a particular keyword or keyword phrase, and then the tool seeks out related sites for you. This helps to simplify your backlink building efforts by helping you create quality, relevant backlinks to your site, and making the job easier in the process.
There is another way to gain quality backlinks to your site, in addition to related site themes: anchor text. When a link incorporates a keyword into the text of the hyperlink, we call this quality anchor text. A link's anchor text may be one of the under-estimated resources a webmaster has. Instead of using words like "click here" which probably won't relate in any way to your website, using the words "Please visit our tips page for how to nurse an orphaned kitten" is a far better way to utilize a hyperlink. A good tool for helping you find your backlinks and what text is being used to link to your site is the Backlink Anchor Text Analysis Tool. If you find that your site is being linked to from another website, but the anchor text is not being utilized properly, you should request that the website change the anchor text to something incorporating relevant keywords. This will also help boost your quality backlinks score.
Building quality backlinks is extremely important to Search Engine Optimization, and because of their importance, it should be very high on your priority list in your SEO efforts. We hope you have a better understanding of why you need good quality inbound links to your site, and have a handle on a few helpful tools to gain those links.

Dynamic URLs vs. Static URLs

The Issue at Hand
Websites that utilize databases which can insert content into a webpage by way of a dynamic script like PHP or JavaScript are increasingly popular. This type of site is considered dynamic. Many websites choose dynamic content over static content because, if a website has thousands of products or pages, writing or updating each static page by hand is a monumental task.
There are two types of URLs: dynamic and static. A dynamic URL is a page address that results from the search of a database-driven web site or the URL of a web site that runs a script. In contrast to static URLs, in which the contents of the web page stay the same unless the changes are hard-coded into the HTML, dynamic URLs are generated from specific queries to a site's database. The dynamic page is basically only a template in which to display the results of the database query. Instead of changing information in the HTML code, the data is changed in the database.
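As a minimal sketch of the idea (the database, table, and column names here are hypothetical), the same PHP template below serves a different page for every threadid passed in the URL:

<?php
// thread.php - one template, many pages: the content comes from the database,
// selected by the threadid parameter in the URL (e.g. thread.php?threadid=12345).
$threadid = (int) $_GET['threadid'];
$db = new mysqli("localhost", "db_user", "db_password", "forum");
$result = $db->query("SELECT title, body FROM threads WHERE id = " . $threadid);
$row = $result->fetch_assoc();
echo "<h1>" . htmlspecialchars($row['title']) . "</h1>";
echo "<p>" . htmlspecialchars($row['body']) . "</p>";
?>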
But there is a risk when using dynamic URLs: search engines don't like them. Those at most risk of losing search engine positioning due to dynamic URLs are e-commerce stores, forums, blogs, sites built on content management systems like Mambo or WordPress, and any other database-driven website. Many times the URL that is generated for the content in a dynamic site looks something like this:

   http://www.somesites.com/forums/thread.php?threadid=12345&sort=date

A static URL on the other hand, is a URL that doesn't change, and doesn't have variable strings. It looks like this:

   http://www.somesites.com/forums/the-challenges-of-dynamic-urls.htm
Static URLs are typically ranked better in search engine results pages, and they are indexed more quickly than dynamic URLs, if dynamic URLs get indexed at all. Static URLs are also easier for the end-user to view and understand what the page is about. If a user sees a URL in a search engine query that matches the title and description, they are more likely to click on that URL than one that doesn't make sense to them.
A search engine wants to list only pages in its index that are unique. Search engines combat this issue by cutting off the URLs after a specific number of variable strings (e.g.: ? & =).
For example, let's look at three URLs:

   http://www.somesites.com/forums/thread.php?threadid=12345&sort=date
   http://www.somesites.com/forums/thread.php?threadid=67890&sort=date
   http://www.somesites.com/forums/thread.php?threadid=13579&sort=date

All three of these URLs point to three different pages. But if the search engine purges the information after the first offending character, the question mark (?), now all three pages look the same:

   http://www.somesites.com/forums/thread.php
   http://www.somesites.com/forums/thread.php
   http://www.somesites.com/forums/thread.php

Now, you don't have unique pages, and consequently, the duplicate URLs won't be indexed.
Another issue is that dynamic pages generally do not have any keywords in the URL. It is very important to have keyword rich URLs. Highly relevant keywords should appear in the domain name or the page URL. This became clear in a recent study on how the top three search engines, Google, Yahoo, and MSN, rank websites.
The study involved taking hundreds of highly competitive keyword queries, like travel, cars, and computer software, and comparing factors involving the top ten results. The statistics show that of those top ten, Google has 40-50% of those with the keyword either in the URL or the domain; Yahoo shows 60%; and MSN has an astonishing 85%! What that means is that to these search engines, having your keywords in your URL or domain name could mean the difference between a top ten ranking, and a ranking far down in the results pages.
The Solution
So what can you do about this difficult problem? You certainly don't want to have to go back and recode every single dynamic URL into a static URL. This would be too much work for any website owner.
If you are hosted on a Linux server, then you will want to make the most of the Apache Mod Rewrite Rule, which gives you the ability to inconspicuously redirect one URL to another, without the user's (or a search engine's) knowledge. You will need to have this module installed in Apache; for more information, you can view the documentation for this module here. This module saves you from having to rewrite your static URLs manually.
How does this module work? When a request comes in to a server for the new static URL, the Apache module redirects the URL internally to the old, dynamic URL, while still looking like the new static URL. The web server compares the URL requested by the client with the search pattern in the individual rules.
For example, when someone requests this URL:
   http://www.somesites.com/forums/thread-threadid-12345.htm

The server looks for and compares this static-looking URL to what information is listed in the .htaccess file, such as:

   RewriteEngine on
   RewriteRule thread-threadid-(.*)\.htm$ thread.php?threadid=$1

It then converts the static URL to the old dynamic URL that looks like this, with no one the wiser:
   http://www.somesites.com/forums/thread.php?threadid=12345
You now have a URL that not only will rank better in the search engines, but that your end-users can understand at a glance, while Apache's Mod Rewrite Rule handles the conversion for you and keeps the dynamic URL working behind the scenes.
If you are not particularly technical, you may not wish to attempt to figure out the complex Mod Rewrite code and how to use it, or you simply may not have the time to embark upon a new learning curve. Therefore, it would be extremely beneficial to have something to do it for you. This URL Rewriting Tool can definitely help you. What this tool does is implement the Mod Rewrite Rule in your .htaccess file to secretly convert a URL to another, such as with dynamic and static ones.
With the URL Rewriting Tool, you can opt to rewrite single pages or entire directories. Simply enter the URL into the box, press submit, and copy and paste the generated code into your .htaccess file on the root of your website. You must remember to place any additional rewrite commands in your .htaccess file for each dynamic URL you want Apache to rewrite. Now, you can give out the static URL links on your website without having to alter all of your dynamic URLs manually because you are letting the Mod Rewrite Rule do the conversion for you, without JavaScript, cloaking, or any sneaky tactics.
Another thing you must remember to do is to change all of the links in your website to the static URLs in order to avoid penalties by search engines due to having duplicate URLs. You could even add your dynamic URLs to your Robots Exclusion Standard file (robots.txt) to keep the search engines from spidering the duplicate URLs, as sketched below. Regardless of your methods, after using the URL Rewrite Tool you should ideally have no links pointing to any of your old dynamic URLs.
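For the hypothetical forum example used above, such a robots.txt record could look like this – it keeps spiders away from the old dynamic script while the rewritten static URLs remain crawlable:

User-agent: *
Disallow: /forums/thread.php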
You have multiple reasons to utilize static URLs in your website whenever possible. When that is not possible and you need to keep your database-driven content under those old dynamic URLs, you can still give end-users and search engines a static URL to navigate, while behind the scenes they are still your dynamic URLs in disguise. When a search engine engineer was asked if this method was considered "cloaking", he responded that it was not, and that in fact search engines prefer you do it this way. The URL Rewrite Tool not only saves you time and energy by helping you use static URLs that are converted transparently to your dynamic URLs, but it will also save your rankings in the search engines.

Duplicate Content Filter: What it is and how it works

Duplicate Content has become a huge topic of discussion lately, thanks to the new filters that search engines have implemented. This article will help you understand why you might be caught in the filter, and ways to avoid it. We'll also show you how you can determine if your pages have duplicate content, and what to do to fix it.
Search engine spam is any deceitful attempt to deliberately trick the search engine into returning inappropriate, redundant, or poor-quality search results. Many times this behavior is seen in pages that are exact replicas of other pages, created to receive better results in the search engine. Many people assume that creating multiple or similar copies of the same page will either increase their chances of getting listed in search engines or help them get multiple listings, due to the presence of more keywords.
In order to make a search more relevant to a user, search engines use a filter that removes duplicate content pages from the search results, and the spam along with it. Unfortunately, good, hardworking webmasters have fallen prey to the filters imposed by the search engines that remove duplicate content. It is often those webmasters who unknowingly spam the search engines, when there are some things they could do to avoid being filtered out. In order for you to truly understand the concepts you can implement to avoid the duplicate content filter, you need to know how this filter works.
First, we must understand that the term "duplicate content penalty" is actually a misnomer. When we refer to penalties in search engine rankings, we are actually talking about points that are deducted from a page in order to come to an overall relevancy score. But in reality, duplicate content pages are not penalized. Rather they are simply filtered, the way you would use a sieve to remove unwanted particles. Sometimes, "good particles" are accidentally filtered out.
Knowing the difference between the filter and the penalty, you can now understand how a search engine determines what duplicate content is. There are basically four types of duplicate content that are filtered out:
  1. Websites with Identical Pages - These pages are considered duplicate content, and websites that are identical to another website on the Internet are also considered to be spam. Affiliate sites with the same look and feel which contain identical content, for example, are especially vulnerable to a duplicate content filter. Another example would be a website with doorway pages. Many times, these doorways are skewed versions of landing pages. However, these landing pages are identical to other landing pages. Generally, doorway pages are intended to be used to spam the search engines in order to manipulate search engine results.
  2. Scraped Content - Scraped content is taking content from a web site and repackaging it to make it look different, but in essence it is nothing more than a duplicate page. With the popularity of blogs on the internet and the syndication of those blogs, scraping is becoming more of a problem for search engines.
  3. E-Commerce Product Descriptions - Many eCommerce sites out there use the manufacturer's descriptions for the products, which hundreds or thousands of other eCommerce stores in the same competitive markets are using too. This duplicate content, while harder to spot, is still considered spam.
  4. Distribution of Articles - If you publish an article, and it gets copied and put all over the Internet, this is good, right? Not necessarily for all the sites that feature the same article. This type of duplicate content can be tricky, because even though Yahoo and MSN determine the source of the original article and deem it most relevant in search results, other search engines like Google may not, according to some experts.
So, how does a search engine's duplicate content filter work? Essentially, when a search engine robot crawls a website, it reads the pages, and stores the information in its database. Then, it compares its findings to other information it has in its database. Depending upon a few factors, such as the overall relevancy score of a website, it then determines which are duplicate content, and then filters out the pages or the websites that qualify as spam. Unfortunately, if your pages are not spam, but have enough similar content, they may still be regarded as spam.
There are several things you can do to avoid the duplicate content filter. First, you must be able to check your pages for duplicate content. Using our Similar Page Checker, you will be able to determine similarity between two pages and make them as unique as possible. By entering the URLs of two pages, this tool will compare those pages, and point out how they are similar so that you can make them unique.
Since you need to know which sites might have copied your site or pages, you will need some help. We recommend using a tool that searches for copies of your page on the Internet: www.copyscape.com. Here, you can put in your web page URL to find replicas of your page on the Internet. This can help you create unique content, or even address the issue of someone "borrowing" your content without your permission.
Let's look at the issue regarding some search engines possibly not considering the source of the original content from distributed articles. Remember, some search engines, like Google, use link popularity to determine the most relevant results. Continue to build your link popularity, while using tools like www.copyscape.com to find how many other sites have the same article, and if allowed by the author, you may be able to alter the article as to make the content unique.
If you use distributed articles for your content, consider how relevant the article is to your overall web page and then to the site as a whole. Sometimes, simply adding your own commentary to the articles can be enough to avoid the duplicate content filter; the Similar Page Checker could help you make your content unique. Further, the more relevant articles you can add to compliment the first article, the better. Search engines look at the entire web page and its relationship to the whole site, so as long as you aren't exactly copying someone's pages, you should be fine.
If you have an eCommerce site, you should write original descriptions for your products. This can be hard to do if you have many products, but it really is necessary if you wish to avoid the duplicate content filter. Here's another example why using the Similar Page Checker is a great idea. It can tell you how you can change your descriptions so as to have unique and original content for your site. This also works well for scraped content also. Many scraped content sites offer news. With the Similar Page Checker, you can easily determine where the news content is similar, and then change it to make it unique.
Do not rely on an affiliate site that is identical to other sites, and do not create identical doorway pages. Pages like these are usually filtered out immediately as spam; there is generally no weighing of the page against the site as a whole once it is found to be a duplicate, and the practice can get your entire site in trouble.
The duplicate content filter is sometimes hard on sites that don't intend to spam the search engines. But it is ultimately up to you to help the search engines determine that your site is as unique as possible. By using the tools in this article to eliminate as much duplicate content as you can, you'll help keep your site original and fresh.

 

SEO Tips

How to Redirect a Web Page

301 Redirect

A 301 redirect is the most efficient and Search Engine Friendly method of webpage redirection. It's not hard to implement, and it should preserve your search engine rankings for the redirected page. If you have to change file names or move pages around, it's the safest option. The code "301" is interpreted as "moved permanently".
You can test your redirection with the Search Engine Friendly Redirect Checker.
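If you would rather check a redirect yourself, a few lines of Python can confirm that a page answers with a 301 status and show where it points. This is only a minimal sketch; http://www.old-url.com/ below is a placeholder for your own page.

# Minimal sketch: request a page without following redirects and inspect the response.
# Replace http://www.old-url.com/ with the page you want to test.
import http.client
from urllib.parse import urlparse

url = urlparse("http://www.old-url.com/")
conn = http.client.HTTPConnection(url.netloc)
conn.request("GET", url.path or "/")
response = conn.getresponse()

print("Status:", response.status)                   # a correct redirect reports 301
print("Location:", response.getheader("Location"))  # the address the page now points to
conn.close()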
Below are a couple of methods to implement URL redirection.

IIS Redirect

  • In Internet Services Manager, right-click on the file or folder you wish to redirect
  • Select the radio button titled "a redirection to a URL"
  • Enter the redirection page
  • Check "The exact URL entered above" and "A permanent redirection for this resource"
  • Click on 'Apply'

ColdFusion Redirect

<cfheader statuscode="301" statustext="Moved permanently">
<cfheader name="Location" value="http://www.new-url.com">

PHP Redirect

<?php
header( "HTTP/1.1 301 Moved Permanently" );
header( "Location: http://www.new-url.com" );
exit;
?>

ASP Redirect

<%@ Language=VBScript %>
<%
Response.Status="301 Moved Permanently"
Response.AddHeader "Location","http://www.new-url.com/"
%>

ASP .NET Redirect

<script runat="server">
private void Page_Load(object sender, System.EventArgs e)
{
Response.Status = "301 Moved Permanently";
Response.AddHeader("Location","http://www.new-url.com");
}
</script>

JSP (Java) Redirect

<%
response.setStatus(301);
response.setHeader( "Location", "http://www.new-url.com/" );
response.setHeader( "Connection", "close" );
%>

CGI PERL Redirect

use CGI;
$q = CGI->new;
# CGI.pm issues a 302 by default, so request a 301 explicitly
print $q->redirect( -uri => "http://www.new-url.com/", -status => 301 );

Ruby on Rails Redirect

def old_action
  # Current versions of Rails accept the status directly
  redirect_to "http://www.new-url.com/", :status => :moved_permanently
end

Redirect Old domain to New domain (htaccess redirect)

Create a .htaccess file with the code below; it will ensure that all the directories and pages of your old domain are correctly redirected to your new domain.
The .htaccess file needs to be placed in the root directory of your old website (i.e. the same directory where your index file is placed).
Options +FollowSymLinks
RewriteEngine on
RewriteRule (.*) http://www.newdomain.com/$1 [R=301,L]
Please REPLACE www.newdomain.com in the above code with your actual domain name.
In addition to the redirect, we suggest that you contact every site that links to you and ask them to update their backlink to point to your new website.
Note: This .htaccess method of redirection works ONLY on Linux servers with the Apache mod_rewrite module enabled.

Redirect to www (htaccess redirect)

Create a .htaccess file with the code below; it will ensure that all requests coming in to domain.com are redirected to www.domain.com.
The .htaccess file needs to be placed in the root directory of your website (i.e. the same directory where your index file is placed).
Options +FollowSymLinks
RewriteEngine on
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
Please REPLACE domain.com and www.domain.com in the above code with your actual domain name.
Note: This .htaccess method of redirection works ONLY on Linux servers with the Apache mod_rewrite module enabled.

How to Redirect HTML

Please refer to the htaccess redirect sections above if your site is hosted on a Linux server, or to 'IIS Redirect' if your site is hosted on a Windows server.

Web Hosting

NOWHERE to be found on Search Engines?

If you have submitted your site again and again to Search Engines but you are still unable to get it indexed, it MAY be your Web hosting provider who is responsible.
If another domain sharing your server and IP address is penalized by a Search Engine for spamming, your website may also be penalized or banned. This can happen with virtual (shared IP) hosting.
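There is no foolproof way to see every site on a shared server, but you can at least check whether your domain resolves to the same IP address as another site you suspect shares it. The Python sketch below does only that simple comparison; both domain names are placeholders.

# Rough check for shared-IP hosting: do two domains resolve to the same IP address?
# The domain names are placeholders; this cannot list every site hosted on the server.
import socket

def resolve(domain):
    try:
        return socket.gethostbyname(domain)
    except socket.gaierror:
        return None

my_ip = resolve("www.example.com")        # your own site
other_ip = resolve("www.other-site.com")  # a site you suspect shares your server

print("Your site resolves to:", my_ip)
print("The other site resolves to:", other_ip)
if my_ip and my_ip == other_ip:
    print("Both domains resolve to the same IP address - likely shared hosting.")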
Another situation may arise if your website resides on a server hosting illegal adult content; if that server is on a Search Engine's blacklist, your website may be banned as well. While you should never assume that your site is blacklisted, you should still take precautions. Even if your site is dropped from the rankings temporarily, you can probably clear up the mistake by contacting the Search Engine.
To be safe, it's best to host with a reputable web hosting provider.
Your web hosting provider should also offer excellent uptime. Search engines have no fixed crawling schedule, so your website needs to be up around the clock; if a crawler or a visitor is met with a blank page or a 404 error, your Search Engine ranking can suffer.

Reputed Web Hosting Provider

The web host should have a good reputation with Search Engines.
Although there is no certain way to verify this, well-established, popular hosts are generally a safer choice.
Some of the popular SEO friendly hosting providers include:
  • Directi
  • Interland
  • 1and1
  • Enom
Sites hosted by free web hosting providers do not usually rank well in Search Engines for competitive keywords.

Conclusion

You may benefit from sticking with the most popular hosting service providers, even if they are a little more expensive, rather than going for the best deal from a host with no reputation.
It is also preferable to have an individual IP address so that the risk of index removal is minimized (hosting providers usually offer dedicated IP addresses only with the purchase of an SSL certificate).

Page Layout Ideas

With some basic HTML tweaking, webpages can be made more Search Engine Friendly, i.e. more relevant in the eyes of a Search Engine. This article teaches you how to do just that.
Suppose you have a webpage with two sections:
1. The Left Section: which usually contains menu items, ads, promotional offers, etc.
2. The Main Content Area: which contains the main textual content of the page.
Below is the basic template.
LEFT SECTION
ITEM1
ITEM2
ITEM3
Main Body


HTML used for the sample above.

<table width="80%" border="0" cellspacing="0" cellpadding="0">
<tr>
   <td>
    <b>LEFT SECTION</b><br>
     ITEM1<br>
     ITEM2<br>
     ITEM3<br>
   </td>
   <td bgcolor="#D9BBBB" rowspan="2" valign="top">
          Main Body
   </td>
</tr>
</table>

Notice that the content of the Left Section appears above the Main Content Area in the source. A Search Engine may thus give more importance / preference to the left section of your webpage than to your actual content area.
The above structure can be improved for search engine optimization purposes.
The 'rowspan' attribute of the <td> tag can be used to solve this problem. Using rowspan="2", divide the table into two rows: put the Main Content cell (spanning both rows) in the first row and the Left Section in the second row. The main content then comes first in the HTML source, while the visual layout stays the same.
Optimized HTML Structure

<table width="80%" border="0" cellspacing="0" cellpadding="0">
<tr>
   <td height="1">
   </td>
   <td bgcolor="#D9BBBB" rowspan="2" valign="top">
          Main Body
   </td>
</tr>
<tr>
   <td>
    <b>LEFT SECTION</b><br>
     ITEM1<br>
     ITEM2<br>
     ITEM3<br>
   </td>
</tr>
</table>

Optimized Output

Main Body
LEFT SECTION
ITEM1
ITEM2
ITEM3


Stop Words

Most Search Engines do not consider extremely common words in order to save disk space or to speed up search results. These filtered words are known as 'Stop Words'.
There is no single official list, but typical examples of the common words search engines may ignore include: a, an, and, are, as, at, be, by, for, from, how, in, is, it, of, on, or, that, the, this, to, was, what, when, where, who, will, with.
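To see what removing stop words does to a keyword phrase, here is a small Python sketch that strips the sample list above from a query, roughly the way a search engine might reduce a phrase to its meaningful terms. The list is only a common sample, not any particular engine's actual list.

# Illustrative only: strip a sample stop word list from a keyword phrase.
# Real search engines maintain their own, unpublished stop word lists.
STOP_WORDS = {
    "a", "an", "and", "are", "as", "at", "be", "by", "for", "from",
    "how", "in", "is", "it", "of", "on", "or", "that", "the", "this",
    "to", "was", "what", "when", "where", "who", "will", "with",
}

def strip_stop_words(phrase):
    return [word for word in phrase.lower().split() if word not in STOP_WORDS]

print(strip_stop_words("How to redirect a web page"))
# ['redirect', 'web', 'page']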