Compare scientific websites with a new Google Trends layer!

I have always had the feeling that the Natureplex (the web division of the Nature Publishing Group, headed by Timo Hannay) is ahead of the comparable departments of most scientific journal publishing conglomerates. Now, with the help of a new Google Trends layer that compares websites by traffic, this impression has been confirmed again, albeit without exact numbers. I hope that more and more scientific journals will finally have an incentive to experiment with new web technologies. Also, a quick look at the Regions comparison at the bottom left lets you revisit the history-based assumption that Science is number one on the web in the US compared to Nature, while Nature remains UK- and Europe-centric.

“Today, we add a new layer to Trends with Google Trends for Websites, a fun tool that gives you a view of how popular your favorite websites are, including your own! It also compares and ranks site visitation across geographies, and related websites and searches.”

Source: Official Google Webmaster Central Blog via Webmonkey

The same comparison with Alexa:

6 thoughts on “Compare scientific websites with a new Google Trends layer!”

  1. Thanks, Attila. Everyone stops reading for Christmas, I see! And I wonder what Cell published in Sept 07 to cause that spike?
    Fascinating, thanks for creating this.

  2. Soft updates guarantees that the only filesystem inconsistencies after an unclean shutdown are leaked blocks and inodes. To resolve this you can run a background fsck, or you can ignore it until you start to run out of space. We also could have written a mark-and-sweep garbage collector, but never did. Ultimately, the bgfsck is too expensive, and people did not like the uncertainty of not having run fsck. To resolve these issues, I have added a small journal to soft updates. However, I only have to journal block allocations and frees and inode link count changes, since softdep guarantees the rest. My journal records are each only 32 bytes, which is incredibly compact compared to any other journaling solution. We still get the great concurrency and the ability to ignore writes that have been canceled by newer operations. But now we have a recovery time that is around 2 seconds per megabyte of journal in use. That’s 32,768 blocks allocated, files created, links added, etc. per megabyte of journal.

Comments are closed.