After twenty-something years of a diverse educational and professional life motivated by healthy lifespan extension and dominated by science and technology, I have reached the riskiest and most rewarding phase: launching a business around it and making it real. I have founded a startup called AgeCurve Limited, and we are already offering a product,… Continue reading AgeCurve Limited and Gen P: my business angle on aging and longevity
Last week I participated in a one-day Apache Spark workshop in London, developed by Databricks and organised by Big Data Partnership. Databricks Training Resources is the most important link you need to know to get started; it contains the whole training material. Let me share some short comments: Spark is the next, logical, generalised step leveraging the… Continue reading 1 day Apache Spark training: randomish insights
I have become quite obsessed with Markov chain Monte Carlo (MCMC) methods lately. It is said that MCMC methods form the most frequently used class of algorithms in computer science. However, when I searched for a comprehensive list of MCMC applications across different domains, to my surprise I found none. So I’d like to ask… Continue reading Big list of Markov chain Monte Carlo (MCMC) applications
A snippet from the following interview: The Regeneration Generation: A Conversation With Bob Hariri, Vice-Chairman and Co-Founder of Human Longevity Inc.
Earlier this year (February–April) I ran nine short, one-hour hands-on sessions (five people per session) called Hadoop 101 for bioinformaticians at the Genome Campus, for European Bioinformatics Institute and Sanger Institute people. The participants were bioinformaticians, developers and sysadmins. My idea was to start with a ~20-minute theoretical introduction to provide some handles on whether… Continue reading Hadoop 101 for bioinformaticians: 1 hour crash course, code and slides
Larry Page acknowledges in a recent interview that Google’s mission statement is outdated and has become irritatingly narrow:
MCMC methods guarantee an accurate enough result (say, parameter estimation for a phylogenetic tree). But they usually deliver it only in the long run, and many burn-in steps might be necessary before the chain performs well. And if the data size grows, the number of operations needed to draw a sample grows too (N -> O(N)… Continue reading Pleasingly Parallel MCMC: cracked wide open for MapReduce and Hadoop
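To make the burn-in behaviour mentioned above concrete, here is a minimal random-walk Metropolis-Hastings sketch in Python (my own illustration, not code from the linked post; the function names and parameters are hypothetical). The chain is deliberately started far from the target so the early, discarded burn-in steps are doing real work:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, burn_in=1000, step=0.5, seed=42):
    """Minimal random-walk Metropolis-Hastings sampler.

    Draws n_samples from the distribution whose log-density (up to an
    additive constant) is log_target, discarding the first burn_in
    steps while the chain moves from x0 towards the target region.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for i in range(burn_in + n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)),
        # computed in log space for numerical stability.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        if i >= burn_in:  # keep only post-burn-in samples
            samples.append(x)
    return samples

# Target: standard normal; the chain starts at x0 = 10, well outside
# the high-probability region, so burn-in is genuinely needed.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=10.0, n_samples=5000)
mean = sum(samples) / len(samples)
```

Note the cost pattern the excerpt alludes to: in a Bayesian setting each `log_target` evaluation touches the whole dataset, so every single sample costs O(N) in the data size, which is exactly what makes naive MCMC painful at scale and parallel variants attractive.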