I see a lot of people still speculating on Bitcoin using “technical indicators” and talking about “support levels” and throwing around charts based on things like Elliott Wave analysis. This just doesn’t make sense to me. I don’t believe you can analyze Bitcoin like an equity or a fiat currency (yet). We are way too early for that kind of analysis. Bitcoin’s maturity is like gold during Emperor Augustus’s reign (30 B.C.), when he set the price of gold at 45 coins to the pound.
Bitcoin today essentially uses 1 token for both rewarding the supply side and serving the demand side (to consume the service). For a crypto economy (the network) to work, it may ultimately require 2 tokens (an “asset token” for the supply side and a “payment token” for the demand side), much like there is in today’s fiat economy.
Bitcoin’s 1 token approach is, in theory, great: for the supply side to provide its service it has to take payment in the combined token, so the token has to be in users’ hands for them to consume the service, and this creates pressure to sell the combined token as the network grows. Everyone in a crypto network gets to participate in the value created instead of just one segment. The net result is that a user can’t be a passive accumulator of capital; they must be active for the economy to work. But unfortunately, making users investors is asking them to understand the infrastructure. Theoretically, it would be great if users had skin in the game because they would have an emotional connection to the infrastructure, but that’s not realistic or scalable. Power users can participate, but it’s a choice.
In the real world, the users of a system don’t want to take any risk. For example, when you get into a taxi in NYC you don’t have to worry about whether Uber will have an impact on the value of the taxi medallion. Humans don’t want to think about the risk of ordering pizza with tokens that may eventually be worth millions of dollars.
It’s just simpler for human nature. The 2 tokens must be separate because the transaction velocity of capital is very low and the transaction velocity of a currency is very high; as an economy grows, exchanging gold or land (a barter system) is just not going to work. So, we need currency (separate from capital) to accelerate economic growth. Reminder: capital appreciates with economic growth (it’s scarcer), and currency depreciates with economic growth when we print more of it to keep up with inflation and keep prices stable.
Unfortunately, without good governance, this leads to an economy where the people who acquired the asset tokens early become increasingly concentrated holders over time as the economy grows. Governance is how we manage, control, and reconcile the data to find a single source of truth.
So, it’s early. Real early. To determine value this early you need to keep an eye on the layers and how capital/currency and supply/demand are interacting, and most importantly how governance is evolving between layers (one example: how to fund the development of the base chain) across each base-layer currency. And in the end, the base layer (the store of value) will become a commodity and much of the value will move up the crypto stack (so why waste any time doing old-school chart analysis on a layer that will eventually become a commodity; that is, if it is successful at all, and given the power of central banks, even that may be questionable…).
“Governance comprises all of the processes of governing – whether undertaken by the government of a state, by a market or by a network – over a social system (family, tribe, formal or informal organization, a territory or across territories) and whether through the laws, norms, power or language of an organized society…” – Wikipedia
Technology changes everything… and you can’t stop it.
Technology is mostly good, sometimes bad, but you can’t stop progress. Technology enables better health outcomes; it allows people to be more informed and better educated; it makes communication easier and more affordable; it allows for automation and productivity increases (the list is long). But just as important, technology can produce unexpected negative outcomes.
Technology’s unexpected negative outcomes in the past had less potential to harm society than what is upon us today and in the near future. In the past, technology disintermediated companies… In the future, technology has the opportunity to disintermediate society. Hence, GOVERNANCE IS THE FUTURE <<–hint, hint, entrepreneurs: there is a business opportunity here.
Governance – The Past
Up to today, governance frameworks that have had an impact on how technology is used have mostly been left up to industry groups with some government oversight, mostly due to data security concerns (PCI-DSS, HIPAA, NERC, FISMA, etc.), but now countries are mandating good governance as well (GDPR being one example), primarily due to privacy. Further down the technology stack there are many governance organizations; you can find that lineage here for the Internet as an example. However, there is little from these past frameworks that can prepare us for the future, and unfortunately our governing body (the US Congress) is ill-prepared for the job. But I do think we will be OK, because the governance vacuum will create an opportunity for entrepreneurs to fill.
Something Fundamentally Changed – Every Company Is Now A Technology Company
Up to now, technology has been thought of as an enabler of productivity… a driver of capitalism… but unfortunately, many companies still believe their IT team is responsible for technology. Many still believe their executives need to understand technology, but that it’s their IT team’s job to enable it and Security’s job to lock it down… they still believe their execs’ jobs are revenue and profit (selling more widgets, increasing the margin on the widgets they sell, and finding new widgets to sell)… but something fundamentally changed: every company is now a tech company (here) (here) (here). Disintermediation due to technology hasn’t stopped and will not stop. The power of data required good CEOs to turn their companies into technology companies. But be ready for this: now you may become irrelevant because of what you do… or how you are structured to perform what you do…
Machine learning/robotics is here now and will drive a wrecking ball through some major job categories. Blockchain-cryptocurrency economics will change how companies are built and how value moves between entities. CRISPR-Cas9 will produce incredible healthcare outcomes, and quantum computing presents unimagined breakthroughs. These technologies are more powerful than anything society has ever witnessed.
How Artificial Intelligence will change the world (here).
How Blockchain-Cryptocurrency will change the world (here). Corp structure (here).
Governance – The Future
“The real problem of humanity: we have paleolithic emotions; medieval institutions; and god-like technology… and it is terrifically dangerous, and it is now approaching a point of crisis overall.” – E. O. Wilson
The governance of Artificial Intelligence has been more challenging, as there is not a great framework to call upon (here) (here). AI as it relates to justice, data quality, and autonomy involves identifying answers to questions surrounding the safety of AI, what legal and institutional structures need to be involved, control of and access to personal data, and what role moral and ethical intuitions play when interacting with AI. When a citizen’s life can be shaped by algorithms, who is in control of monitoring those who created the algorithms and the outcomes of such algorithms? In the past, we’ve seen machine learning exhibit racial bias, unfairly deny individuals loans, and incorrectly identify basic information about users. The development of AI governance will help determine how best to handle scenarios where AI-based decisions are unjust or contradict human rights.
‘The problem is that currency and capital respond differently to economic growth. Capital appreciates with economic growth and currency depreciates with economic growth as an economy grows. Inflation is commonly thought of as the printing of new money but really it increases in prices over time as a result of economic growth. Capital as an asset type appreciates as the economy grows and is more scarce than currency. We print more currency to keep up with inflation to ensure stable prices hence it depreciates with economic growth over time. What happens over time is that capital becomes more concentrated and we have a lot of people living their lives in currency and only a few living their lives in capital. In crypto if we combine these into a single asset (Bitcoin) we don’t get the same income inequality or wealth inequality however by separating the access token (work token) from the currency token we risk the people who acquired the asset tokens early on may become concentrated over time as the economy grows.‘ – great A16z podcast on the subject
I read this @USAtoday story and had to laugh “The House Democratic Policy and Communications Committee is hosting a session Thursday morning with Ocasio-Cortez of New York (@AOC – 2.42 million followers) and Rep. Jim Himes of Connecticut (@jahimes – 76,500 followers) ‘on the most effective ways to engage constituents on Twitter and the importance of digital storytelling.’”
I’m center-right politically, and I don’t agree with some of @AOC’s views on issues, however, I do have a great deal of respect for her leadership abilities.
As a side note, I don’t disagree with what Himes is quoted as saying, “The older generation of members and senators is pretty clueless on the social media platforms”. Just review the senate’s embarrassing questions at the Zuckerberg hearing and you will see plenty of “clueless” senators … However, Congress is totally missing the point in regard to @AOC’s momentum and it will bite each of them during their next election cycle if they don’t wake up—The point is @AOC is showing leadership—watch and learn!
Congress–your constituents are people… not objects!
@AOC is listening and talking to people—Twitter is just one of her preferred communication tools. Here is the point–Many of today’s politicians tend to treat constituents as ‘objects‘ versus ‘people’ with hopes, dreams, and pains. Remember back when Bill Clinton engaged with a person directly at a 1992 town hall meeting—he talked with people. @AOC resonates because she is having a conversation with ‘people’.
Members of Congress–stop labeling people (i.e. ‘deplorables’, ‘Trump’s base’, ‘the Democrats’, ‘the Republicans’, ‘Men’, ‘Women’, ‘Black’, ‘White’, ‘Hispanic’, ‘LGBTQ’ etc..), fight for what you believe is right for your ‘people’.
Congress—we don’t want ‘managers’, we need ‘leaders’!
‘Management’ is about systems, processes, policies, and resources (what all those federally appointed officials manage daily…) but ‘leadership’ is about vision, inspiration, values, and people. Leaders deal with management shortfalls. Basically, leadership is required when the systems and processes do not work… Leadership is required when the policies are not applicable or do not exist… Leadership is required when there are not enough resources to accomplish the task…
In 2019, being an effective member of Congress requires you to have an open dialog with people, take a stand on issues you believe in, simplify complicated subjects and educate others, build consensus regardless of party and admit when you are wrong (and don’t take credit when you are right).
Regardless if you agree with @AOC or not… learn from her because she is showing you what we expect from members of Congress in 2019.
There is going to be a lot of debate over the next 2 years about healthcare and moving to a single-payer system. Democrats are talking about “Socialized Medicine”… Republicans are talking about “Free Market Healthcare”. As an example, here is a good article that articulates a one-sided argument without many deep recommendations. …But what are the facts?
I’m building the following notes to start capturing data to help me formulate my opinion on the subject. I’ll use a “5 Whys” strategy to think through the issues.
5 Whys is an iterative interrogative technique used to explore the cause-and-effect relationships underlying a particular problem. The primary goal of the technique is to determine the root cause of a problem by repeating the question “Why?”. Each answer forms the basis of the next question. The “5” in the name derives from an anecdotal observation on the number of iterations needed to resolve the problem.
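The technique can be sketched as a simple chain of questions and answers. The abbreviated healthcare answers below are illustrative placeholders for the discussion that follows, not conclusions:

```python
# Illustrative sketch of a "5 Whys" chain as a simple data structure.
# The answers here are abbreviated stand-ins for the healthcare analysis.
def five_whys(problem, answers):
    """Pair each 'Why?' with its answer and return the full chain."""
    chain = [("Problem", problem)]
    for i, answer in enumerate(answers, start=1):
        chain.append((f"Why? #{i}", answer))
    return chain

chain = five_whys(
    "U.S. healthcare costs are far higher than peer countries",
    [
        "Physician pay, administrative costs, and prices are all higher",
        "Residency slots are capped and billing is fragmented across payers",
    ],
)

for label, text in chain:
    print(f"{label}: {text}")
```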
Let’s review the data. Among comparable high-income countries, the United States has the poorest population health outcomes. For example, we had the lowest life expectancy (78.8 years compared with a mean of 81.7 years).
If you compare the U.S. to the top 11 other countries in the world we are out of balance:
We know a “free market” healthcare system won’t work for a simple economic reason: healthcare demand will always outstrip supply, and this imbalance will always create a wide (and ever-widening) economic gulf. However, we don’t have a “free market” system today—we have a mix… We have Medicare, Medicaid, the VA, Fed/DoD, and the Indian Health Service… we have the so-called “free market” system primarily sponsored by employers… and then we have the tens of millions of uninsured (or underinsured) citizens…
So, what are our goals:
Wait! What about being the leading innovator in healthcare? What about driving more of the conversation away from “sick care” and really toward “health care” – that’s going to require entrepreneurism and a free market to innovate and capitalism to support… The reality is that this isn’t an easy fix… it’s quite complicated and anyone involved in the argument needs to know the details.
So what are the “5 Whys” of the healthcare debate?
The first two “Whys” are easy…
We know that the United States spends 17.8% of GDP ($9,403 per person) on healthcare while Canada, Germany, Australia, the U.K., Japan, Sweden, France, the Netherlands, Switzerland, and Denmark spend a mean of 11.5% and all have better overall outcomes (life expectancy, as one example). Why?
We know from the JAMA study mentioned above that the high U.S. spend is because physicians earn more in the U.S., administrative costs are higher in the U.S., and general prices for pharmaceuticals, procedures, and tests (example: MRI) are higher in the U.S. Why?
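A quick back-of-the-envelope check on those figures: the spend shares and per-person figure come from the paragraphs above, while the implied GDP per capita is derived from them (an approximation, not an official statistic):

```python
# Back-of-the-envelope arithmetic using the figures cited above.
us_share = 0.178            # U.S. health spend as a share of GDP
peer_share = 0.115          # mean share for the ten peer countries
us_spend_per_person = 9403  # USD per person

# GDP per capita implied by the two U.S. figures (approximate)
gdp_per_capita = us_spend_per_person / us_share

# What the U.S. would spend per person at the peer countries' share of GDP
peer_equivalent = gdp_per_capita * peer_share

# Implied per-person gap versus the peer-country share
savings = us_spend_per_person - peer_equivalent
print(round(gdp_per_capita), round(peer_equivalent), round(savings))
```

So at the peer share of GDP, the U.S. would spend roughly $3,300 less per person per year.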
Here is where it gets complicated… We need to dig into each of these:
The number of slots supported by Medicare (who pays for most residency slots) has been frozen for two decades after Congress lowered it in 1997 at the request of the American Medical Association and other doctors’ organizations.
What could policymakers do?
Fund more residency slots.
Allow Medicare to limit the slots for certain areas of specialization to control supply and demand.
End the requirement mandating that foreign doctors complete a U.S. residency program and allow them to complete an equivalent residency program in another country or allow foreign-trained doctors to practice under the supervision of a U.S.-trained doctor.
Allow nurse practitioners to perform more procedures that they are qualified to complete.
The reliance on multiple payers (Medicare, Medicaid, and many private insurers, each of which has its own set of procedures and forms for billing and collecting payment) drives up costs. The American health system offers a lot of choice among health plans. All of this causes physicians to spend on average 3 hours per week addressing billing-related matters, while medical support workers spend an additional 19 hours per week and administrators spend a total of 36 hours per week on billing and collection matters. Why?
We are only at the beginning of creating interoperability and data standards for healthcare. There is a great deal that has been done and a lot on the table. It’s a very complicated issue but well understood. More here, here, here, here and here.
What could policymakers do?
Legislate strict electronic data standards (provider example) for interoperability and transparency.
Legislate standard electronic billing and collection policies.
General prices for pharmaceuticals, procedures, and tests (example: MRI) are higher in the United States. Why?
Other countries negotiate with providers and set rates that are much lower. In Canada and Britain, prices are set by the government; in Germany and Japan, providers and insurers come to an agreement, or the government steps in. In the United States, however, health-care providers have considerable power to set prices, and so they set them high. Why?
In the U.S., health care delivery and payment are fragmented, with numerous, separate negotiations between drug manufacturers and payers and complex arrangements for various federal and state health programs (more). And, in general, the U.S. allows wider latitude for monopoly pricing of brand-name drugs than other countries are willing to accept. Why?
Two of the most profitable (and powerful) industries in the United States are the pharmaceutical and medical device industries. (It is, however, true that Medicare and Medicaid negotiate prices on behalf of their members and purchase care at a substantial markdown from the commercial average prices.) These powerful industries have pushed back on government policymakers who try to legislate overall spending levels for payments to providers and drug makers because it would impair their revenue and profit growth.
Other countries may also have policies that result in new drugs and medical technologies being adopted more gradually. (more)
Other countries have more friendly legal environments to challenge the validity of patents. Why is the US different?
This article shows how numerous patent filings (“patent thickets”) are used to drive up the costs of Humira by AbbVie Inc. in the US as an example.
Create legislation to counteract patent thicket practices (note: Sen. Susan Collins of Maine has called for ways to counter such practices but there has been no policy put forward).
“How many businesses do you know that want to cut their revenue in half? That’s why the healthcare system won’t change the healthcare system.” Rick Scott – Senator from Florida
Let the federal government negotiate lower drug prices for Medicare beneficiaries. This would shift the U.S. policy toward a more centralized pricing system like that used in other high-income countries. Currently, the Veterans Health Administration and the Department of Defense are the only federal entities allowed to effectively negotiate directly with drug manufacturers; they pay prices that are roughly half of those paid at retail pharmacies. (more, more) RISK: Too much legislation may make our pharmaceutical sector less attractive to investments resulting in less innovative and effective drugs in the future.
This is a work in progress so I will add more as I research and learn.
Health care is a misnomer for our medical system; it should be called sick care. Doctors mostly make their money when we are sick. What if doctors really could prevent disease? Well, they can, but you need to be prepared to do the work, because disease prevention is about:
Lifestyle (what you eat, your weight and how much you exercise—covered here)
Keeping great medical records (don’t get me started on a. doctors keeping paper files, b. doctors making it difficult to get your medical records (push them) and c. electronic medical record systems having different formats for the data (more))
Documenting and understanding your genome (DNA)
This set of notes will dig into the last of these: your genome! My hope is to explain this subject in a way where you can understand how to get your genome data, view it at a high level, view the details, and begin to understand the inner workings of your genetic makeup, so you understand the value of leaving your ‘sick care’ doctor behind and finding a true personalized ‘health care’ MD.
Step 1: Have your genome mapped
There are many low-cost direct-to-consumer DNA mapping sites, and this linked article will explain a few options for you to consider (here is another). I personally like 23andMe ($199 USD) because it does a great job of explaining DNA to a novice and a professional, it seeks FDA approval, and the site allows you to download your data.
Let’s first cover a few standard definitions to make sure we
are all on the same page:
DNA (deoxyribonucleic acid) – A
molecule composed of two chains that coil around each other to form a double
helix carrying the genetic instructions used in the growth, development,
functioning, and reproduction of all known living organisms and many viruses.
Chromosome – a DNA molecule with part
or all the genetic material (genome) of an organism. Human cells have 23 pairs
of chromosomes (22 pairs of autosomes and one pair of sex chromosomes), giving
a total of 46.
Genes – From 23andMe, “Genes are
segments of DNA that tell your body how to function and what traits to express.
People have about 22,000 genes in their genome. Most of these come in duplicate
– one copy from your mother and one from your father. Everyone has the same set
of genes, but each one can vary by a few letters (bases) between people. These
“variants” can lead to differences in the way you look, how you
respond to stimuli, and whether or not you are predisposed to certain diseases.”
Once you get your data back from one of these direct-to-consumer genome mapping sites, you will have access to their portal. I’m going to use 23andMe as the example, but many are similar. When you get your report, you can easily go to the ‘Health’ section and see what it is reporting. It will look something like the figure below.
Step 2: Download your raw genome data to a safe, password-protected, and encrypted location
If you are using 23andMe, you can download your raw data by following their instructions. If you know what you are looking for, you can also dig into your raw data here (more on this later).
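If you'd rather explore the raw file programmatically, a minimal sketch follows. It assumes the common 23andMe-style export layout (comment lines starting with `#`, then tab-separated columns: rsid, chromosome, position, genotype); the file name is a placeholder:

```python
# Minimal sketch: parse a 23andMe-style raw data export into a dict
# keyed by rsid. Assumed format: '#' comment lines, then tab-separated
# columns of rsid, chromosome, position, genotype.
def load_raw_genome(path):
    genotypes = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue  # skip header comments and blank lines
            rsid, chromosome, position, genotype = line.rstrip("\n").split("\t")
            genotypes[rsid] = {
                "chromosome": chromosome,
                "position": int(position),
                "genotype": genotype,
            }
    return genotypes

# Example usage (replace with your own downloaded file):
# genome = load_raw_genome("genome_raw_data.txt")
# print(genome.get("rs1333049"))
```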
But what can you do with your raw genome data?
WARNING: This is where things get a bit tricky. There are a few very important things to know:
Some sites (like Promethease) list all the SNP markers (from 23andMe: “A marker is a specific location in the genome where a genetic sequence has been shown to vary between people. Markers are denoted by a unique identifier, most often an ‘rs number’”) associated with different traits and diseases, as curated from SNPedia. Drawing conclusions from this reporting is often frowned upon by geneticists. There is such a thing as an SNP that is strongly associated with a disease (these are typically the ones 23andMe has FDA approval to report; example: BRCA1/2. Individual mutations in BRCA1 increase the risk of breast cancer, and Angelina Jolie is just one of the thousands of women who chose bilateral prophylactic mastectomy to mitigate the increased risk of a BRCA1 mutation.), but most common diseases are not really affected by any single SNP.
The best analysis uses the compound effect of many SNPs, with the understanding that each one contributes only a small effect. This concept is called polygenic risk scoring (PRS). It allows scientists to take anyone’s genome and calculate that person’s aggregate risk for certain diseases even if they don’t have one of the known major mutations. A polygenic risk score is the total score of all the minor gene variations that increase disease risk. This is a powerful upgrade to your doctor’s ability to predict disease in any given patient: doctors are no longer in the dark with only the family history to guide them. (here, here, here and here are 4 great articles on PRS)
Be careful of companies that target-market supplements or programs at gene variants; always check with a licensed medical doctor (MD) before taking any action.
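The polygenic scoring idea above can be sketched in a few lines: a score is just a weighted sum of risk-allele counts across many SNPs. The rsids, weights, and genotypes below are invented for illustration, not real GWAS values:

```python
# Illustrative polygenic risk score: a weighted sum of risk-allele
# counts across many SNPs. The rsids and weights here are invented
# for illustration -- real scores use effect sizes from GWAS studies.
ILLUSTRATIVE_WEIGHTS = {
    # rsid: (risk_allele, effect_weight)
    "rs0000001": ("A", 0.12),
    "rs0000002": ("G", 0.05),
    "rs0000003": ("T", 0.30),
}

def polygenic_score(genotypes, weights):
    """Sum effect_weight * (count of the risk allele) over known SNPs."""
    score = 0.0
    for rsid, (risk_allele, weight) in weights.items():
        genotype = genotypes.get(rsid, "")  # e.g. "AG"; missing SNPs add 0
        score += weight * genotype.count(risk_allele)
    return score

# A made-up genotype call set for one hypothetical person:
person = {"rs0000001": "AA", "rs0000002": "AG", "rs0000003": "CT"}
print(polygenic_score(person, ILLUSTRATIVE_WEIGHTS))  # 0.12*2 + 0.05*1 + 0.30*1
```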
Step 3: Mapping your raw data to the SNPedia database (but heed warning #2 above)
I am going to
use the Promethease site. A report is $12, and it can directly connect
your 23andMe DNA data with the SNPedia human
genetics wiki. It also provides information on the effects of genetic variants
on Phenotypes (the
composite of the organism’s observable characteristics or traits, including its
physical form and structure; its developmental processes; its biochemical and
physiological properties; its behavior, and the products of behavior, for
example, a bird’s nest. An organism’s phenotype results from two basic factors:
the expression of an organism’s genetic code, and the influence of
environmental factors.) and the information is sourced from peer-reviewed
scientific publications. Keep in mind that the match against the SNPedia
database may be wrong, as the raw data is not held to the same quality level as
that which is part of an FDA-approved report from 23andMe.
The report only takes 5 to 10 minutes to generate and you will get it via email as a zip file and via their website. It will look like the figure below where you have a search panel on the right and the data on the left. In the example below, you can see the SNP (Single Nucleotide Polymorphism) marker is rs1333049 (From 23andMe, “A marker (SNP) is a specific location in the genome where a genetic sequence has been shown to vary between people. Markers are denoted by a unique identifier, most often an “rs number”, or “rsid”.”). You will also see the Position (From 23andMe, “If you stretched out all of the DNA in a chromosome from end to end, you could count the position of each letter (A,C,T,G) relative to the first one in the sequence. This count is referred to as a genome coordinate or position. 23andMe uses the same coordinates as the National Center for Biotechnology Information (NCBI), build 37.”). You will also see the Magnitude (From SNPedia.com, “Magnitude is a subjective measure of interest varying from 0 to 10. Over time it should be adjusted up or down by the community.” The range is from 0 (you have the common genotype) to 10 (significant information).) You probably only want to review magnitude 3 and above.
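That magnitude cut-off is easy to apply programmatically if you export the report data. A sketch, assuming you have the entries as a list of dicts (the sample records are invented):

```python
# Sketch: keep only Promethease-style entries at or above a magnitude
# threshold. The sample records below are invented for illustration.
def filter_by_magnitude(entries, minimum=3.0):
    """Return entries whose 'magnitude' meets the threshold, highest first."""
    kept = [e for e in entries if e.get("magnitude", 0) >= minimum]
    return sorted(kept, key=lambda e: e["magnitude"], reverse=True)

sample = [
    {"rsid": "rs1333049", "magnitude": 3.5},
    {"rsid": "rs0000004", "magnitude": 0.0},  # common genotype, low interest
    {"rsid": "rs0000005", "magnitude": 2.1},
]
for entry in filter_by_magnitude(sample):
    print(entry["rsid"], entry["magnitude"])
```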
If you click on the SNP marker hyperlink rs1333049 you will be
taken to the details page in the WIKI.
From the page above on the far right, you have links to many great sites including Ensembl and 23andMe’s detail pages.
Once on the 23andMe page you can also see the Variant (From 23andMe, “At any position in the genome that varies, there is more than one possible version (or variant) of the DNA sequence. For example, some people might have an A at a certain position, whereas other people might have a T.” Genetic variations, or variants, are the differences that make each person’s genome unique. DNA sequencing identifies an individual’s variants by comparing the DNA sequence of an individual to the DNA sequence of a reference genome maintained by the Genome Reference Consortium (GRC).) and Your genotype at a marker (From 23andMe, “Your genotype at a marker is the combination of variants that you have at that position on both chromosomes’ copies. For example, if you have the A on one chromosome copy and a T on the other one, your genotype is AT. Some chromosomes don’t come in pairs (i.e. the mitochondrial chromosome and, for the most part, the X and Y chromosomes in men), so your genotype can sometimes be a single letter.”)
There are several other tools out there to get information on each one of the SNP markers. One of the best is found here at NIH.gov. With this, you can search for many research articles per SNP marker.
Now that you have all that data, please reread Warning #2 above!
Step 4: Map your data to known polygenic algorithms
These sites are reported to be working with polygenic risk scores:
Keep in mind that this is a relatively new science that has been enabled by the mapping of the human genome. The research is coming out fast. As an example, Sekar Kathiresan and his colleagues at Harvard University and the Broad Institute have been focused on variations linked to coronary artery disease, atrial fibrillation (an irregular heart rate), type 2 diabetes, inflammatory bowel disease, and breast cancer. They developed an algorithm that could use all this information on a disease’s genetic variants to produce a polygenic risk score, a single number that would indicate a person’s risk of developing each disease based on their genomic data. Their algorithm identified 20 times more people at high risk of a heart attack than did the traditional method of just looking for the variant that indicates inherited high cholesterol. If more people know they’re at risk, they can go on medication or start making lifestyle changes to prevent the onset of the disease. You can get a copy of the report here or here.
As an example, here is data from Impute.me a non-profit (please donate) genetics analysis site run by independent academics since August 2015. Their design goal is to provide analysis at the cutting edge of what is currently known and possible in genetics research. A central part of their site is the creation of a guidebook for personal genome analysis. This book provides more in-depth explanations for many of the concepts involved and it’s highly recommended as a guide to accompany your analysis. (New: Updates to the site will be announced at twitter).
Let’s go into a couple of interesting things you can do with their site. Note that I am using the text below directly from the Impute.me website.
A polygenic risk score is a value that gives a summary of a large number of different SNPs – each of which contributes a little to disease risk. The higher the value, the higher the risk of developing the disease. Of course, the interpretation of this risk depends a lot on other factors as well: how heritable the disease is; how much of this heritability we can explain with known SNPs; and, not least, what the risk of disease would be for you otherwise, i.e. without taking the genetic component into account. Because the polygenic risk score is only a risk-modifier, knowledge of these three other values is required if you want to know what your overall risk is, i.e. the chance in percent. This calculator cannot provide that. But it can provide a view of the known genetic component of your disease risk, based on all the SNPs that we know are associated with the disease. This, we believe, makes it a better choice for complex diseases than the one-SNP-at-a-time analysis typically seen in consumer genetics.
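One common way a raw score like this gets presented is relative to a reference population: standardize it and report a percentile. A sketch of that idea, assuming scores are roughly normally distributed; the mean and standard deviation below are made-up numbers, not values from any real cohort:

```python
from statistics import NormalDist

# Sketch: express a raw polygenic score as a population percentile,
# assuming scores are roughly normally distributed in the reference
# population. The mean/sd values used below are invented.
def score_percentile(raw_score, population_mean, population_sd):
    z = (raw_score - population_mean) / population_sd  # standardize
    return NormalDist().cdf(z) * 100  # percentile, 0-100

# Invented example: a score one standard deviation above the mean
print(round(score_percentile(1.2, population_mean=1.0, population_sd=0.2)))
```

A score one standard deviation above the mean lands at roughly the 84th percentile; this says nothing about absolute risk, only where you sit relative to the assumed reference distribution.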
If you upload your 23andMe data, then after a couple of days you will have access to this site and a unique ID that will be good for 2 weeks.
This is a module that
can visualize the entire compendium of human disease – at each point showing
relevant genetic findings. The goal is to illustrate how to present genetic
data depending on a medical status.
Diseases where one mutation has a strong medical effect are luckily rare. For the majority of people, learning from our genes is instead a matter of risk modifications and weak predictions. For a healthy adult, these are typically of little practical use. However, the assumption changes drastically if you are not healthy: if you are being evaluated for a given disease anyway, it may very well be useful to know whether a different but medically related diagnosis has a particularly high or low risk.
For example, if a person is suffering from mental problems but has not yet been properly evaluated for any specific diagnosis, then genetic risk information for all diseases related to mental problems may become useful knowledge, because the information can then serve as a guiding point in that difficult challenge of first diagnosis. Similar examples can be made for virtually all areas of early medical evaluation.
It is the purpose of this module to help with this: by forcing browsing into pre-defined sets of disease areas, the algorithm provides you only with genetic information that is relevant to your current medical status. Nothing more, nothing less. Risk scores relevant to the medical area you are interested in will be shown; fluke signals from irrelevant disorders will not. The details behind all information given here can be explored in the remaining modules of the site, as indicated when you click on each of the colored bubbles above. As such, this module can serve as an entry-way into the entire site, depending on your context and interest.
At the root of the tree, we find ‘feeling fine’, which is always a neutral color: people who feel fine don’t need to worry about their genetic risk scores. However, when you select ‘heading to hospital’ and climb up the tree, the genetic risk scores are revealed as they become relevant. More of the thinking behind this module is explained in this short animation video from 2017.
The overview of rare disease variants found in this module is not the most extensive list of single-SNP effects available online. They are shown here because they are all well-supported, strong genetic effects for a selection of rare inherited diseases where microarray analysis made sense. This was the reason these SNPs were included in the 2016 version of the 23andMe health report.
Especially the last part – that microarray analysis made sense – is very important when analyzing the genetics of rare disease: the microarray technology used in consumer genetics is not optimal because the really strong mutations typically are not measured on a microarray; DNA sequencing is required to detect them. Therefore, microarray analysis of rare disease effects suffers from many false-negative results. There are many further details to this discussion; chapter 3.5 in this book is a good place to seek more information.
Nonetheless, the 2016 selection of microarray-measurable SNPs made by 23andMe is still reasonably relevant to report, particularly for the carrier information. For non-23andMe users, this module has the additional benefit of translating the data for proprietary 23andMe SNPs, with the caveat that because the SNPs are very rare they are often hard to impute.
This is a test of a systematic approach to drug-response SNPs. Most of the known drug-response-associated genetics concern liver enzymes (e.g. CYP2C19) and their break-down of drug metabolites. These are well characterized elsewhere already. The focus of this module is to integrate systematic multi-SNP profiles beyond liver enzymes and provide estimates of drug-response.
To illustrate how this works, the module shows the calculations that take place for a number of drug-response predictions, both on a per-drug level and on a per-SNP level, corresponding to the first and second tables. The first table summarizes the per-drug calculation whenever possible. If possible, a Z-score is calculated in the same way as described in the complex disease module; if not, it is indicated as ‘not calculated’, and it is then necessary to look at the second table for comments on the individual SNPs from the input studies. The Z-score approach takes information from many SNPs and can therefore be considered more thorough, depending of course on the underlying scientific study.
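The Z-score step mentioned above can be sketched as a standardization of the raw multi-SNP score against a reference population. A minimal sketch, where the population mean and standard deviation are hypothetical placeholders rather than real study values:

```python
# Sketch of Z-score standardization: how many standard deviations a raw
# multi-SNP score lies from a reference-population mean. Numbers are hypothetical.

def z_score(raw_score, population_mean, population_sd):
    """(score - mean) / sd; positive means above the population average."""
    return (raw_score - population_mean) / population_sd

z = z_score(raw_score=0.19, population_mean=0.10, population_sd=0.06)
print(round(z, 2))  # 1.5
```

This is why the Z-score approach needs a well-characterized reference population from the underlying study – without one, the per-SNP comments in the second table are all that can be reported.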
Most SNPs in the genome are not actually found within a gene: they are ‘intergenic’. When talking about a gene mutation, however, as is done in popular media, what is most often meant is a SNP that alters the sequence of a gene. Because of selection pressure throughout our evolution, these are rare. They are also often the focus of scientific studies using DNA-sequencing technology to discover the causes of rare diseases. Interestingly, however, many of us actually have these ‘gene-breaking’ SNPs while nonetheless being perfectly healthy. The imputation technology used by this site gives the opportunity to identify a number of these based on genotyping microarray results alone. If you give your ID-code to this module, a table of all measured missense and nonsense mutations will be presented.
Interpretation of the table can be done in many ways and, unlike other modules, this one does not give ‘one true answer’. One method is to search for SNPs where you have one or two copies of the non-common allele and then investigate the consequence using other resources such as dbSNP or ExAC. Note, however, that the definition of ‘common’ is very dependent on ethnicity: in this browser, common just means the allele most often found in impute.me users, so it is recommended to check the ethnic distribution in e.g. the 1000 Genomes browser. Another help provided is the PolyPhen and SIFT scores, which can give an indication of the consequence. Ultimately, the goal of this is to satisfy your curiosity about the state of your functional genes. If you happen to find out that you carry two copies of a completely deleterious mutation (a nonsense mutation) but otherwise feel healthy, feel free to contact us. By being healthy in spite of a specific broken gene, you’d be helping to complete our view of genes and how they work.
Thousands of mutations in the BRCA1 and BRCA2 genes have been documented. 23andMe reports data for three mutations that account for much of inherited breast cancer, but other possible mutations in these two genes are not included in the 23andMe report. Many can only be detected by sequencing, such as that from Myriad Genetics. However, dozens of additional mutations of interest can be reached with imputation analysis. The following lists your genotype for the three directly measured 23andMe SNPs as well as all other SNPs in the two genes that are either missense or nonsense. For interpretation, we recommend reading more about PolyPhen, SIFT scores, and ClinVar.
If ClinVar indicates the variant as pathogenic, the SNP is measured in your genome, and your genotype is not the genotype indicated as normal, then this indicates a potential problem. The list is sorted by the ClinVar variable by default.
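The flagging rule just described can be expressed as a simple filter. A sketch with hypothetical records (the field names and SNP IDs are invented for illustration, not the site’s actual data format):

```python
# Flag a variant when: ClinVar says pathogenic, the SNP was measured,
# and the observed genotype differs from the genotype indicated as normal.
# All records below are hypothetical illustrations.

variants = [
    {"snp": "rs0000101", "clinvar": "pathogenic", "measured": True,  "genotype": "AG", "normal": "AA"},
    {"snp": "rs0000102", "clinvar": "benign",     "measured": True,  "genotype": "CC", "normal": "CC"},
    {"snp": "rs0000103", "clinvar": "pathogenic", "measured": False, "genotype": None, "normal": "TT"},
]

flagged = [v["snp"] for v in variants
           if v["clinvar"] == "pathogenic"
           and v["measured"]
           and v["genotype"] != v["normal"]]

print(flagged)  # ['rs0000101']
```

Note that the unmeasured pathogenic variant is not flagged: absence of a measurement is not evidence of a normal genotype, which is the same false-negative caveat raised in the rare-disease discussion above.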
UK Biobank Calculator
A study of roughly half a million UK residents, known as the UK Biobank, has recently been published. This module allows the calculation of a genetic risk score for any of the published traits.
Now that you have all that data please reread Warning #4 above.
Step 5. Make a plan.
If you are at high risk for coronary artery disease, see a cardiologist. If you are at high risk for breast cancer, mental illness, eye problems, etc., see a medical (MD) specialist.
…but be wary of Warning #5 above–don’t go see a “quack” and don’t self medicate!
…but also do your homework and understand whether the specialist is up to date on the latest and greatest – for example, if you see a psychiatrist for ADHD, make sure they are trained in epigenetics.
Over the past couple of years, I’ve been called in by executives, bankers, and investors to determine if an entity (company, university or government) should spin out some technology into a new business. Some have already made up their minds and others are in the fact-finding stage. This is the process I use to suss it all out…
The first thing you must do is figure out whether the entity has given this spinout a lot of thought or whether it is just an idea. To find out, you must get their answers to 11 questions. You are trying to figure out the level of depth in their thought process—is it cursory or deeply considered?
The Problem: What is the business problem that a NewCo would use this IP to solve?
The Solution: How does the IP solve the problem?
The Value: How much impact would the solution have on a company’s cost savings, increased productivity, increased security, etc.?
The Market Size: How big is the market for the solution?
The Timing: Why is now a good time for this type of solution?
The Proof: What proof/milestones have been achieved that prove there is a market for the solution?
The Plan: What does the entity see as NewCo’s strategy for taking the solution to market?
The Competition: Who else is out there selling a similar solution? How well are they doing? What does this IP have that others are missing?
The Obstacles: What obstacles would be in the way of NewCo’s success?
The Team: What team members would go with the IP (if any at all)?
The Financial Model: How much cost would it take to build a NewCo and how much value could it return over what time frame?
If the entity is experienced and has sold or licensed IP in the past, or has previously funded companies as subsidiaries or new entities, they have likely thought through these 11 items in quite a bit of detail and understand the value of their asset and who in the market are the likely acquirers of, or investors in, such an asset. If not, then there is market-analysis work to do first to determine if this exercise is even worth the energy.
The second thing you must do is understand the ‘why’–the motivation. Why would the entity spin out IP rather than capitalize on its value itself? It’s easy to understand why a university or government would spin tech out (they can’t capitalize on it easily), but for a corporation to do it, there must be a good reason.
Is it easier and cheaper for R&D to be done outside the company?
Is it a deviation from corporate priorities (example: we do R&D for the government)?
It’s used internally with great value, but the tech would have even more value to the entity if thousands of other companies were using it as well.
It’s important to understand the underlying motivations so you can begin to understand the dynamics of the situation.
The third item to figure out is whether the entity has an expectation of value. Some entities have divested a lot of IP in the past, and for others, this may be the first time. You must figure out if the entity’s expectations are reasonable.
Last, does the entity have a business model in mind? Do they just want to sell the IP to a buyer, or do they want to fund a startup? …or is the answer somewhere in between (for example, licensing the technology to NewCo for a % of sales)? You must figure out whether they want a short- or long-term return, and whether they understand the ramifications (additional investment may be required; legal fees; etc.).
Let’s go a little deeper into item 1 above as it will begin to shape the entire conversation with the entity.
What category does this IP fit in, regarding the tech ecosystem? A great place to start is Crunchbase.
As an example, let’s say that the IP in question helps the entity’s corporate employees communicate better on tasks, in and between corporate silos. You dig into Crunchbase and find 475 companies competing in the ‘Task Management’ category, another 2,400 competing in the ‘Collaboration’ category, and 108 companies competing in both. Obviously, there are many solutions in each of these sub-categories solving different aspects of collaboration for different audiences, and there is a lot of overlap, so the sub-categories are difficult to navigate, but you can ascertain there is a great deal of competition in each of the collaboration submarkets.
Are there market leaders leveraging similar IP? One easy way to tell is if Gartner, Forrester, IDC, or another large, trusted enterprise research organization has created a market guide.
If we take our task management example a bit further, we might see a few big horizontal areas where the tech is applicable (possibly Project Management, CRM and IT Service Management) and who some of the dominant players in the market are today. Gartner does a nice job of showing this in something they call a magic quadrant.
You’ll see that the market (at least in the past) really splits between two areas:
Externally facing: Customer Relationship Management (CRM) – Salesforce.com is the leader
Internally facing: IT Services Management (ITSM), PM, HR, Product Development – ServiceNow, Planview, and Atlassian are market leaders
You can also build some market reference points on these leaders. For example, ServiceNow finished 2018 with about $2.8 billion in annual revenue (and Q2 revenue was up 45%) and Salesforce increased its market share in 2017 by more percentage points than the rest of the top twenty CRM vendors combined.
If enterprise solutions are being sold with similar IP you can generally get access to market size data that the Enterprise Research Analysts have published—it might cost you a Gartner subscription but there is great value in the research.
Who are the new market challengers that are doing something with similar IP? Every space is being disintermediated in some way, shape, or form, so you need to understand how the market where your IP fits is changing and how much is being invested in the space.
With our example, you will see that companies like Asana and Trello are having a major impact on the future of the space. You can also see that a great deal of funding is going into these new platforms from well-known venture capital firms. You can find all this data at no cost in Crunchbase.
Looking at both market leaders and challengers will help you start to understand the ‘problems’ being solved, the ‘solutions’ being used/sold, the ‘value’ being created, and the ‘pricing’ being used by similar IP in the market.
Are there changes on the horizon for the future of the markets where this IP fits? Once you dig into the enterprise plays and compare the market challengers, you will soon see if any patterns emerge. This could start to provide a view into ‘timing’ and market adoption and will allow you to determine whether your IP is too old or perfectly timed for where the market is going to be in a few years. This will also help you start formulating some of the obstacles facing the IP.
If we continue with our same task management example, you will see that the market is moving toward platforms that enable many tightly integrated solutions. The market has coined the term Platform-as-a-Service (PaaS) for the shift. Gartner even has a magic quadrant referencing the platform enablers for the shift.
Timing. As a new company, if you are late to the market, you will likely lose. Only fast followers that differentiate primarily based on a superior cost structure due to scale can win as followers. If you follow Pete Flint’s NFX model for startup timing, “it’s all about who enters the market closest to the critical mass point. It’s at this point when technology, economic and cultural forces can combine to enable explosive growth”.
With our task management example, given the onset of FaaS and current enterprise companies already fully entrenched in PaaS, the IP would have to already be written with FaaS in mind to have a chance at competing with the momentum of the challengers or the entrenchment of the enterprise PaaS solutions… or the idea would have to be so unique that a rewrite of the code would be worth the effort/cost.
Proof. You must be able to prove there is value for this IP in the market and, to the best extent possible, minimize risks. Simple questions that need to be answered are as follows:
Who is using the IP today? Is there or can there be a detailed case study written? Can the value be quantified with the current use case? Have others seen, requested or been given access to the IP and what is their feedback?
How much has been invested to build the current IP (in dollars and time)? How big was the team? What are the skills of each of the people on the team? When was the IP last updated? How well is the IP documented? Are there patents from the company or others that infringe on the IP (you must do a patent review prior to next steps—do a preliminary review here).
Is the current IP part of a Continuous Delivery process and is it well documented?
What languages were used to build the IP? What open-source libraries? What are the licenses of each of those libraries? Has an inventory of the code been documented by a group such as Black Duck? Are there any proprietary libraries being leveraged that lock the IP into a certain platform (such as AWS)?
With our task management example, the code is being used by the entity so there is a great deal of information that can be gathered and used. The code also would come with its first customer (the entity). If the entity isn’t willing to continue to be the first customer then this may be a warning sign and should be discussed.
Plan. What strategies are potentially applicable to build a company around the IP? An easy way to start digging into opportunities for different strategies is to leverage the “Blue Ocean Strategy” and to dig into different uncontested markets for the IP. Are there areas in the market that are uncontested, and the IP is a perfect fit?
You also have to know how the IP lines up with a Lean Startup Minimum Viable Product (MVP) for the uncontested market and how long it will be until the IP is ready to put in front of customers to test assumptions. This is critical for early decision making.
With our previous example, it’s difficult to find uncontested space given how big the entrenched players are and how much energy (VC $) is going into the hundreds of challengers. There may be a play for AI, IoT or Consumers—and then again, there may still be an uncontested space available that’s not well understood.
Team. How big was the team that built the IP? What are the skills of each of the people on the team? Where are they located? How are they compensated? What are they doing now? Are any of the people that built the IP willing to move with the IP?
Financial Model. Given most IP today is delivered via the Cloud it would be good to have some idea of the cost of building such a company. The spreadsheet outlined in this post by Gary Gaspar (and in the comments) is good enough to start getting a handle on what it would likely cost to get such a company testing the uncontested spaces you found previously.
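As a back-of-the-envelope complement to that spreadsheet, the core calculation is burn rate versus runway. A sketch with entirely hypothetical figures, not estimates for any real company:

```python
# Rough first-pass NewCo cost model: monthly burn and months of runway.
# All numbers below are hypothetical placeholders.

def monthly_burn(headcount, avg_salary_per_month, cloud_cost, other_overhead):
    """Total monthly spend: payroll plus cloud infrastructure plus everything else."""
    return headcount * avg_salary_per_month + cloud_cost + other_overhead

def months_of_runway(funding, burn):
    """How many months the funding lasts at a constant burn rate."""
    return funding / burn

burn = monthly_burn(headcount=5, avg_salary_per_month=12_000,
                    cloud_cost=3_000, other_overhead=7_000)
print(burn)                                          # 70000
print(round(months_of_runway(1_000_000, burn), 1))   # 14.3
```

Even a toy model like this forces the key questions: how many people, how much cloud spend, and how long the money lasts while testing the uncontested spaces.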
At the end of the day, the more you know about the market for the IP and the opportunities, the better decision you will make about what to do with it in the market.
However, never forget the golden rule: Building a company is NEVER about the idea… and it’s not about the plan…
It’s ALL about the team and how the plan is Executed!
I had a money manager say something to me the other day that struck me as odd—so I thought I’d dig in a bit to see if I could rationalize his comment. He said, “We are all using the same data,” when I asked him about an opinion I had heard from someone else about the stock market over the next few months. His basic point was that the market is about “valuations & fundamentals” (more): significant deviations from intrinsic value are rare, and markets usually revert rapidly to share prices commensurate with economic fundamentals. Therefore, if you use tried-and-true analysis of a company’s discounted cash flow to make your valuation decisions, you will be fine over the long run.
I think the most important thing he said was “over the long run”, because some investors do have access to fundamental data (private feeds) that others don’t, and if a proprietary feed is faster and gives the investor a ‘speed advantage’ in trading, then that is an unfair advantage. For example, if the price of a stock goes from $100 a share to $95, and you know that and someone else doesn’t, you can sell at $100 to a buyer who doesn’t yet know the price is now $95. If you are a high-frequency trader and you know prices are changing millions of times a day across many securities, you can make a great deal of money.
Some investors also have access to “alternative data” like satellite imagery to analyze information such as parking lots, web scraping to track items such as consumer preferences, and geolocation data to analyze information such as consumer traffic to certain stores. With tools like Artificial Intelligence/Machine Learning, the new Quant 2.0 Wall Street geniuses can take proprietary fundamental feeds and these new alternative datasets and do algorithmic trading to uncover value faster than any old-school hedge fund manager (example).
But at the end of the day, you don’t have access to any of these tools… So, if you can suffer through all the volatility and you trust that your money manager is on top of your portfolio’s underlying fundamentals then yes, the market can still work for you over the long run. However, just know that many strategic investors will have made a lot more money than you, on those same investments, by leveraging better tools and quant 2.0 type talent.
…you can throw all this long-term investing out the window if you have a mere $7.5 billion to invest: then Bridgewater Associates can do it for you, and you will likely do very well.
While the quants and AI-powered algorithms can crunch numbers and make investment decisions, they can’t offer the human touch, and it has become evident to me that the human touch is an essential component of long-term investing—net: you need a coach to get you through the stress of the volatility. Get a good money manager.