Friday, July 15, 2011

Act 4

A) 1. What are the advantages and disadvantages of using search engines?
Disadvantages (of search engine optimization)
  1. Changes must be made to your website's code. Normally the changes are invisible to visitors. However, if you have invested heavily in a search engine-unfriendly site, the process can be time-consuming and costly, and occasionally significant changes may need to be made to your site's copy, navigation, or design. Ultimately, though, you will see returns if you commit to the necessary changes.
  2. Results (rankings and traffic) start slowly. You will normally see results within 3-4 months.
  3. There can be no guarantee. As the search engines themselves have the final say, you can't predict how many rankings you'll get for a particular search term or engine; nor can you predict how much traffic you'll get to your site.
Advantages (of pay-per-click advertising)

  1. Pay-per-click (PPC) advertising programs are fast to implement, usually taking two to three weeks to set up and run. Google AdWords listings are up and running as soon as you start the campaign, and Overture listings are live within 3-5 business days (after an editor reviews them).
  2. Nothing has to change on your web site, although I would recommend you create targeted landing pages for each advertisement as they've been proven to increase conversions (but that's another subject for another time!).
  3. There is no limit to the number of terms or keyword phrases you can bid on.
  4. PPC is good if you intend to run promotions through your site, as you can turn the PPC campaign on and off whenever you choose.
  5. You can dictate where the listing appears on the results page (within the sponsored ads area) and determine what the ad says.
  6. It is very easy to test all your different search terms, offers, and so on, and to measure the results.

2. Compare and contrast individual search engines and meta search engines.
=Finding information on the Internet can be intimidating, and search sites help users of the World Wide Web find subjects easily. Although we tend to call most search sites "search engines," not all of them are search engines per se. An individual search engine builds and searches its own database of web pages, while a meta search engine keeps no database of its own: it sends your query to several individual engines at once and aggregates their results. The difference between search engines and search directories lies in the way they find and categorize sites. We can compare and contrast major search sites and sample some popular meta search engines.

B) 1. When is it appropriate to use a search engine?
When is it appropriate to use a search/subject directory?
=Search engines use automated software programs known as spiders or bots to survey the Web and build their databases. These programs retrieve and analyze web documents, and the data collected from each web page are added to the search engine's index. When you enter a query at a search engine site, your input is checked against that index of all the web pages the engine has analyzed. The best URLs are then returned to you as hits, ranked in order with the best results at the top.
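The crawl-index-query cycle described above can be sketched in a few lines. This is a minimal illustration, not how any real engine works: the page texts are made up, and ranking here is just a count of matching query terms, whereas real engines combine many more signals.

```python
# Minimal sketch of a search engine's inverted index: every word on each
# crawled page maps to the set of pages containing it, and a query is
# answered by ranking pages on how many query terms they contain.
from collections import defaultdict

# Hypothetical "crawled" pages (URL -> page text).
pages = {
    "example.com/a": "search engines build an index of web pages",
    "example.com/b": "subject directories are organized by people",
    "example.com/c": "an index maps each word to the pages containing it",
}

# Build the inverted index: word -> set of URLs containing that word.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(query):
    """Return URLs ranked by how many query terms each page contains."""
    scores = defaultdict(int)
    for term in query.lower().split():
        for url in index.get(term, set()):
            scores[url] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("index of web pages"))  # -> ['example.com/a', 'example.com/c']
```

Page a matches all four query terms while page c matches only two, so a is ranked first, mirroring the "best results at the top" behavior described above.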
=Subject directories are useful when you want to know more on broad-based subjects, such as:
  • General topics
  • Popular topics
  • Specialized directories
  • Current events
  • Product information
Let us consider each of these in turn, but first some words about the strengths of subject directories. They are organized and they are selective. When you are not sure of the exact term to search for, browsing a subject directory's subject categories will help you find those keywords. This is actually quite useful for preparing certain search engine searches. Browsing for keywords will also provide a context for your search. Remember: subject directories are usually smaller than search engines, have much more focused and higher quality links, but are poor for exhaustive searching.
D) 1. What is the invisible web or deep web?
The "visible web" is what you can find using general web search engines. It's also what you see in almost all subject directories. The "invisible web" is what you cannot find using these types of tools.
The first version of this web page was written in 2000, when this topic was new and baffling to many web searchers. Since then, search engines' crawlers and indexing programs have overcome many of the technical barriers that made it impossible for them to find "invisible" web pages.
These types of pages used to be invisible but can now be found in most search engine results:
  • Pages in non-HTML formats (pdf, Word, Excel, PowerPoint), now converted into HTML.
  • Script-based pages, whose URLs contain a ? or other script coding.
  • Pages generated dynamically by other types of database software (e.g., Active Server Pages, Cold Fusion). These can be indexed if there is a stable URL somewhere that search engine crawlers can find.
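Script-based, dynamically generated pages of the kind listed above are recognizable by the query string after the "?" in their URLs. A short sketch (the URL is hypothetical) shows how such a URL breaks apart into a script path and the parameters the server-side script uses to generate the page:

```python
# Split a "script-based" URL into its path and query-string parameters.
from urllib.parse import urlparse, parse_qs

url = "http://catalog.example.edu/search.asp?subject=history&page=2"  # hypothetical
parts = urlparse(url)
params = parse_qs(parts.query)

print(parts.path)  # -> /search.asp  (the server-side script)
print(params)      # -> {'subject': ['history'], 'page': ['2']}
```

As long as a stable URL like this exists somewhere for a crawler to find, modern search engines can index the resulting page.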
 2. How do you find the invisible web?
=Simply think "databases" and keep your eyes open. You can find searchable databases containing invisible web pages in the course of routine searching in most general web directories; such databases are of particular value in academic research.
Use Google and other search engines to locate searchable databases by searching a subject term and the word "database". If the database uses the word "database" in its own pages, you are likely to find it in Google. The word "database" is also useful when searching a topic in the Google Directory or the Yahoo! Directory, because they sometimes use the term to describe searchable databases in their listings.
3. Why are these web pages not visible in search engines or subject directories?
=There are still some hurdles search engine crawlers cannot leap. Here are some examples of material that remains hidden from general search engines:
  • The Contents of Searchable Databases. When you search in a library catalog, article database, statistical database, etc., the results are generated "on the fly" in answer to your search. Because the crawler programs cannot type or think, they cannot enter passwords on a login screen or keywords in a search box. Thus, these databases must be searched separately.
    • A special case: Google Scholar is part of the public or visible web. It contains citations to journal articles and other publications, with links to publishers or other sources where one can try to access the full text of the items. This is convenient, but results in Google Scholar are only a small fraction of all the scholarly publications that exist online. Much more - including most of the full text - is available through article databases that are part of the invisible web. The UC Berkeley Library subscribes to over 200 of these, accessible to our students, faculty, staff, and on-campus visitors through our Find Articles page.
  • Excluded Pages. Search engine companies exclude some types of pages by policy, to avoid cluttering their databases with unwanted content.
    • Dynamically generated pages of little value beyond single use. Think of the billions of possible web pages generated by searches for books in library catalogs, public-record databases, etc. Each of these is created in response to a specific need. Search engines do not want all these pages in their web databases, since they generally are not of broad interest.

    • Pages deliberately excluded by their owners. A web page creator who does not want his/her page showing up in search engines can insert special "meta tags" that will not display on the screen, but will cause most search engines' crawlers to avoid the page.
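The "meta tags" mechanism mentioned above is the robots meta tag. The sketch below (with made-up page markup) shows how a crawler might parse a page's HTML and skip indexing when it finds a "noindex" directive; the parsing approach is illustrative, not any particular engine's implementation.

```python
# Sketch: honor <meta name="robots" content="noindex"> before indexing a page.
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the directives from any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives += [d.strip().lower()
                                for d in attrs.get("content", "").split(",")]

def may_index(html):
    """Return True unless the page carries a "noindex" robots directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" not in parser.directives

hidden_page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
normal_page = '<html><head><title>Welcome</title></head></html>'

print(may_index(hidden_page))  # -> False (a crawler should skip this page)
print(may_index(normal_page))  # -> True
```

The directives never display on screen, yet a well-behaved crawler that parses them will leave the page out of its index, which is exactly why such pages stay invisible.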

