Understanding Aging Conference on FriendFeed!

The “Understanding Aging: Biomedical and Bioengineering Approaches” conference will be held June 27-29, 2008 at UCLA, organized by Aubrey de Grey, Irina Conboy and Amy Wagers. I like to call it UnderstEnding Aging to myself, and I am excited to go to LA and meet new people as well as people from SENS3.

Yesterday I created a FriendFeed room for the conference, as it seems to be a perfect place for live microblogging of the conference and for sharing all kinds of links, videos, comments, feeds and feedback. Working on aging and its postponement (you can bravely call it life extension) is always pioneering work, so it’s time to use pioneering web apps for that purpose, like FriendFeed.

Aubrey de Grey, Kevin Perrott and Kevin Dewalt have already joined the room. What about you? See you on FriendFeed, see you in LA!

3 thoughts on “Understanding Aging Conference on FriendFeed!”

  1. I would like to start an investment-oriented company investing people’s money (less my commissions) in the technologies that will enable longevity through regenerative medicine – and thus I’ll be able to afford such medicine as soon as it arrives – and as an investment proposition it would not be too hard to sell.

    Stocks would go up like Microsoft’s.

    But will those with a vested interest in death ever allow such medical advances to become public?

  2. Soft updates guarantee that the only filesystem inconsistencies after an unclean shutdown are leaked blocks and inodes. To resolve this you can run a background fsck, or you can ignore it until you start to run out of space. We also could have written a mark-and-sweep garbage collector, but never did. Ultimately, the background fsck is too expensive, and people did not like the uncertainty of not having run fsck. To resolve these issues, I have added a small journal to soft updates. However, I only have to journal block allocations and frees, and inode link-count changes, since softdep guarantees the rest. My journal records are each only 32 bytes, which is incredibly compact compared to any other journaling solution. We still get the great concurrency and the ability to ignore writes that have been canceled by new operations, but now we have a recovery time of around 2 seconds per megabyte of journal in use. That’s 32,768 blocks allocated, files created, links added, etc. per megabyte of journal (a quick arithmetic check follows below).

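The capacity figure in that last comment is easy to verify: a one-megabyte journal split into fixed 32-byte records holds exactly 1,048,576 / 32 = 32,768 of them. Below is a minimal, purely illustrative C sketch (not taken from the FreeBSD soft-updates/journaling code) that reproduces the quoted numbers; the 32-byte record size and the 2-seconds-per-megabyte recovery cost are assumptions lifted directly from the comment.

    /* Quick check of the journal figures quoted in the comment above.
     * The 32-byte record size and 2 s/MiB recovery cost come from the
     * comment itself; everything else here is illustrative. */
    #include <stdio.h>

    int main(void) {
        const unsigned long record_size    = 32;        /* bytes per journal record */
        const unsigned long journal_in_use = 1UL << 20; /* 1 MiB of journal currently in use */
        const double recovery_per_mib      = 2.0;       /* seconds of recovery per MiB of journal */

        unsigned long records = journal_in_use / record_size;
        double mib = journal_in_use / (double)(1UL << 20);

        printf("journal records per MiB: %lu\n", records);                   /* prints 32768 */
        printf("estimated recovery time: %.1f s\n", mib * recovery_per_mib); /* prints 2.0 s */
        return 0;
    }

Each of those 32,768 records corresponds to a single block allocation or free, file creation, or link-count change, which is where the “per megabyte of journal” figure comes from.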