Friday, November 6, 2015

A Farewell to Aprons

There’s a saying among futurists that once the Model-T proved that mass manufacturing of low-cost cars was possible, and that demand for them was high, it was easy to predict the eventual mass adoption of cars. And there were a few second-order effects you could then predict too, like the need for a fuel distribution system and professional car repairmen. But few people predicted that everyone having a car would cause downtown shopping districts to be replaced by big-box stores on the edge of town. This essay attempts to draw together a few trends now emerging, and predict one of those harder-to-see second-order effects.

I think cooking at home is set to nearly disappear within fifteen years (except among hobbyists). Within twenty years some new construction will cease to include much of a kitchen. It will become an afterthought, like the half-bath on the first floor of your average single-family home, not a central piece of family life. The days of $50,000 kitchen remodels are soon to be over.
What’s this based on? The convergence of two trends.

Trend 1: Robotics, Sensors, and Automated Cooking

Cooking is becoming subject to full automation. Momentum Machines’ automated kitchen can produce a gourmet burger from scratch ingredients. The Innit kitchen knows the recipe and cooking instructions for thousands of meals. The Moley Robotic Chef has two arms and hands that mimic the motions of Michelin-rated chefs to reproduce any meal. Eatsa is a fully automated fast-food restaurant in San Francisco. And so on.

The technologies driving this are the advances in robotics, machine learning, and sensors. These trends are covered in depth elsewhere, but the basic idea is that all the little sensors going into smartphones and other mobile technology are combining with advances in robotics to produce robotic chefs that can sense the food they are working with and cook it properly.
This technology isn’t necessarily cheap on a per-unit basis. And I don’t expect it to come to the personal home, not any time soon. A robotic kitchen in a personal home would be dead capital 22 hours out of the day, just sitting there, since our need to eat just three meals a day isn’t going to change. In stage one of the great change coming to cooking, this technology will be deployed at restaurants, destroying millions of service-sector jobs. Fast-food restaurants will become automats. Fancy restaurants will have a wait staff in the dining room but limited personnel in the kitchen.
But that’s an easy prediction. It’s already fairly obvious.

Trend 2: Supply-chain by drone

When most people hear “drone”, they think of the little quad-copters that have consumer and professional versions. I mean those too of course, but I also mean something much broader than that. When I say drone, I mean any self-propelled, unmanned system for transport. The Predator drone delivers bombs. Self-piloted cargo ships with 10,000 containers are drones. Little dog-sized boxes on wheels for home-delivery of groceries are drones. And so forth. Form-factors will vary by local geography and cargo, but the basic idea is that delivery-robots are going to quickly become our society’s distribution system. They’re going to replace air freight, cargo ships, and long-haul trucking, and solve the last mile too. This is going to revolutionize many industries of course, put millions of drivers and pilots out of work, and allow Amazon to bring you a tube of toothpaste on a moment’s notice. Wonderful! (Well, maybe not for the drivers and pilots…)
Similarly, Amazon is leading the way in automating warehousing and packaging for delivery. They still have humans involved in picking and packing, but you better believe they’re working on automating that too. Amazon’s ultimate goal has to be “dark” warehouses that need minimal human supervision.

Taken together, supply chains are going to get automated in the same way that manufacturing has already been automated. At the beginning, and perhaps for a while, humans will be involved at the loading and unloading stages of delivery, but that is a minimal amount of labor compared to the current level of human labor involved in things like running FedEx and the Postal Service. Eventually I expect products to travel halfway around the world, from producer to consumer, without any human touching them or operating any of the vehicles they travel in.
The Combination of the Two

I want you to consider a “freshly prepared supply chain”, on the level of a city-sized area. Consider how this will reduce food waste, save time and effort for consumers, and offer a great variety of food items for (relatively) immediate consumption.
Stage 1 is that restaurants begin to automate their kitchens, led by national chains but eventually including locally-owned restaurants. Eatsa is already fully automated, but this will spread quickly to established chains. The Momentum Machines burger-maker is an obvious good fit for burger joints like Five Guys. If not Five Guys specifically, a competitor. Similar machines will be developed for pizza, pasta, and so forth. When a high-throughput machine for commonly consumed items is not available, a highly-automated kitchen combining Moley’s chef-arms and Innit’s technology will allow a few low-skill employees to produce large numbers of carefully prepared food items.

Happening at the same time is the rollout of general delivery companies for prepared food, like Uber Eats. Currently Uber Eats uses human drivers in traditional cars, but Uber CEO Travis Kalanick has been completely transparent about his intention to buy and own self-driving cars as soon as they’re available. And that’s by road. By air, Google Wing and Amazon Prime Air are leading the way in local delivery by drone.
Within five years, when delivery by drone and self-driving car is common, the first thing we will see is mass adoption of meal delivery via app. I expect that websites like Seamless will see some very good years in the near future if they adapt to this, and there’s no reason to think they won’t. But the less obvious play is in managing the supply chain behind the restaurants. The key insight here is that there’s no reason the restaurant a meal is ordered from has to prepare all the food it sells. Preparation specialization can happen at a metro-regional level, as long as it is within the range of common air drones—or even further, if the item refrigerates well. Imagine one kitchen in a city-region that produces the best puddings, or cream sauces. They might just produce a few ingredients, or common side items like salads or fresh-baked bread.

This might sound expensive, and something only the rich will participate in, but I imagine it will be the opposite of that. Momentum Machines’ burger-maker makes “gourmet quality” burgers with fresh ingredients for the same price as a McDonalds burger. The Eatsa automated restaurant provides fresh bowls of food for the same price as a McDonalds combo meal. And McDonalds itself can lower prices from its current price-level by replacing staff with automated versions of its kitchens.
There are other cost-savings too, besides automating human labor. An automated kitchen can be set up in the warehouse part of town, and pay warehouse rent. It doesn’t need to be downtown to serve a region, because the drones take care of bringing food to where the people are. Further, food items that are currently rejected by buyers for grocery stores for cosmetic reasons, and then trashed, can be used by the meal prep supply chain (and purchased from farmers at a discount to the "pretty" food). Supply-chain management software will use items before they wilt or expire much more efficiently than the average American consumer, who throws away nearly half their food every year.

A second source of cost-savings will be serving the value customer. Right now a bowl of rice and beans sold near cost isn’t profitable enough to interest traditional eateries. Restaurants want to sell high-value items like cocktails, wine, and steaks. An automated supply chain without wait staff will eventually realize that a family-sized portion of rice and beans, plus some vegetables, can be acquired and prepared for less than $1, and sold at a “mere” 100% mark-up. Meatballs or other proteins can be added as a value-add item, but aren’t necessary for human nutrition, and thus freshly prepared but simple meals will be available to nearly any American.
The greatest cost-savings of all however is time. Nearly a hundred years ago much of the work that went into maintaining a home was automated with the invention of the electric dishwasher, vacuum, washing machine, and dryer. The home-manufacture of clothes, once common, was replaced by the Sears catalog, and later the department store. The last two chores remaining that Americans spend the majority of their time on are folding laundry and preparing meals. Automating those away will produce tremendous improvements in quality of life, especially for families who do not have an adult at home full-time or part-time to prepare meals. The harried working-parents who currently take their kids to McDonalds will appreciate the convenience of a meal being brought to the home, ordered through an app as they commute home. I suspect it will be irresistible.

So, to recap, I will draw your attention again to a few key points. The drone supply chain will be able to distribute freshly prepared foods quickly and conveniently anywhere in a metro region within 10 minutes or so (both to consumers and middle-man kitchens). Automated kitchens will be able to consistently produce well prepared meals in a high-volume manner. These meals will be tastier than most meals prepared at home, and probably for about the same cost as groceries at the store (just like how Costco rotisserie chickens are cheaper than whole raw chickens) – and that's before accounting for convenience and time saved. Eventually this will lead to consumers losing the skill to prepare meals (just as most of us cannot sew clothes), and the family kitchen will probably be relegated to milk and breakfast cereal, plus a microwave for reheating leftovers. Kitchens will become smaller, people will spend less money on them, and the “social center” of the home will move from the kitchen to the dining areas – much like the aristocracy of previous centuries.

Tuesday, June 16, 2015

Do colored coins make 51%-attacks inevitable?

UPDATE: I have updated the post by adding some points from Peter Todd at the bottom. The rest of the post remains as originally written.

I have enough posts on Bitcoin that it should be obvious that I am "pro" Bitcoin. But I am also a skeptic, and I seek out evidence of my beliefs being wrong. It's the only way to minimize mistakes in life. Unfortunately for my Bitcoin fandom, most of Bitcoin's critics either don't understand how Bitcoin works or they don't understand the current banking system well. Or both! But I have just read what I think is the most cogent and convincing critique of Bitcoin's limitations from the Clearmatics blog.

Now, Clearmatics is in the distributed ledger space, and they have a product that competes with Bitcoin. So some might dismiss their arguments as motivated reasoning. But that would be foolish. The argument, evaluated on its own merits, is quite sound.

The core insight of Clearmatics' argument is that colored coins are technically possible but it would be a disaster to implement them at significant scale. The reason is that Bitcoin's ledger is not protected by cryptography. Bitcoin is protected by game theory, and colored coins change the rules of the game.

The Bitcoin network is maintained and verified by its miners. The miners compete against each other to verify blocks of transactions and add them to Bitcoin's block chain. Anyone can set up a miner and start broadcasting blocks though, including fraudulent blocks. To defend against this sort of fraud, Bitcoin's nodes and wallets follow the rule that whichever block chain is longest is deemed authoritative, and ignore all others. It is merely assumed that miners are too diverse to coordinate a conspiracy against the network, and thus non-conspirators always have more aggregate computing power than any one fraudster, so the non-conspirators' blockchain is always longest. Fraud is thus ignored.

This breaks down though if a fraudster ever amasses computing power equal to all other miners globally, plus 1%. If the fraudster's computing power is equal to 51% or more of the global network as a whole, then the fraudster's miners will produce blocks faster than the "honest" miners, and the rest of the bitcoin ecosystem (the nodes and wallets) will switch from the honest blockchain to the fraudulent blockchain. This is called a 51% attack.
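
For the sake of concreteness, here's a toy Python sketch of the chain-selection rule described above. It isn't real Bitcoin code, and real nodes actually compare accumulated proof-of-work rather than raw block count, but the simplification is enough to show why out-mining everyone else lets a fraudster rewrite history:

    # Toy illustration of the chain-selection rule described above, not real Bitcoin code.
    # Real nodes compare accumulated proof-of-work; equal-difficulty block count is used
    # here purely for illustration.

    def block_is_valid(block):
        # Stand-in for the real checks (proof-of-work, signatures, no double spends).
        return block.get("valid", True)

    def select_authoritative_chain(candidate_chains):
        """Adopt the longest fully valid chain; everything else is ignored."""
        valid = [c for c in candidate_chains if all(block_is_valid(b) for b in c)]
        return max(valid, key=len) if valid else []

    honest = [{"height": i} for i in range(100)]    # honest miners' chain
    attacker = [{"height": i} for i in range(102)]  # a 51% attacker pulls two blocks ahead
    print(len(select_authoritative_chain([honest, attacker])))  # 102, so nodes adopt the fraud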

51% attacks don't happen though, because the expense of doing so outweighs any benefit. The most recent figure I saw was that the cost of a 51% attack would be about $110 million. Since a 51% attack would destroy the value of Bitcoin itself (the only asset currently on the Bitcoin network), there really isn't a way to extract $110 million from the Bitcoin network before the fraud is discovered and the fraudulent blockchain abandoned by the nodes. Thus a 51% attack is always a money-losing proposition.

There are two scenarios where this game theory breaks down, one of which I have been aware of for some time. One fear I've had for a while is that a government will attack Bitcoin if it's ever deemed to be a threat to their national interest. A lot of Bitcoin's miners are already in China, for instance. If the government there deemed Bitcoin to be a material threat to their capital controls or financial system, it could seize the miners there and coordinate their efforts to assemble a 51% attack against the network. This is a theoretical threat though, and I'm not sure it would ever happen.

Clearmatics' point though is that as soon as you start using colored coins in any serious way, the payoffs of a 51% attack change. For instance, there are roughly 5.8 billion shares of AAPL outstanding, so if you assigned one share per Satoshi, you'd only need 58 BTC to list the entire AAPL market cap on bitcoin. And that's just one company. Global debt and equity markets have many trillions in value. You could even color Satoshis to represent large blocks of currency (say 10 million USD or EUR each) to handle daily settlements between banks.
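
The arithmetic behind that 58 BTC figure, spelled out (one colored satoshi per share, one hundred million satoshis per bitcoin):

    SATOSHIS_PER_BTC = 100_000_000

    aapl_shares_outstanding = 5_800_000_000      # the rough figure used above
    satoshis_needed = aapl_shares_outstanding    # one colored satoshi per share
    btc_needed = satoshis_needed / SATOSHIS_PER_BTC
    print(btc_needed)  # 58.0 -- the entire share register fits in 58 BTC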

At those prices, a $110M investment in taking control of the settlement network becomes profitable. Anyone who can track down the various miners operating the mining pools today can coordinate them into a 51% attack, transfer several billion dollars into various accounts, and then de-coordinate the miners so that the new blockchain continues forward as the "real" one.

Boom. Bitcoin is done as a platform for colored coins. The fact that this risk exists at all means no one should adopt it for this use case.

I'm still a fan of Bitcoin for what it is, but as long as this risk exists I don't think colored coins (at least for financial market use) are in its future. Perhaps they're still useful for things like door locks and rental cars, but only because those items are also too small (or too hard to aggregate a theft of) to make a 51% attack profitable. Nakamoto's design-goal of censorship resistance was achieved, but at the price of not being trustworthy with assets of significant value.

UPDATE: I reached out to Peter Todd via Twitter, and he was kind enough to respond to my queries. I think the strongest point he made is that if there is ever $trillions of value on the Bitcoin network in the form of colored coins, that would make higher mining fees possible. Users would still be paying a small percentage of their overall assets for the secure transfer, so that's bearable, and, as Peter put it, 1% of several trillion would pay for a lot of mining security.

On the other hand, in order to get higher fees, the maximum block size has to remain small. Users compete for access to block confirmations by paying fees to the miners. If blocks are too large though, there's no competition to get into them, and users can get away with paying a small fee or no fee at all. In the future, as the block reward of new Bitcoins shrinks over time, miner fees alone will have to pay for mining operations. Those fees would have to be pretty high to pay for a secure network. Thus getting to trillions in value exchange is a more-or-less necessity for Bitcoin to be a viable and secure network over the long term.
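
To put rough numbers on Peter's point, here's a back-of-the-envelope sketch. Every figure in it is invented, purely to show the shape of the argument:

    # Illustrative only: all figures are invented to show the shape of the argument.
    assets_on_chain = 3e12      # hypothetical: $3 trillion of colored-coin assets
    annual_turnover = 0.5       # hypothetical: half of that value changes hands each year
    fee_rate = 0.01             # the 1% figure from the update above

    annual_fee_revenue = assets_on_chain * annual_turnover * fee_rate
    print(f"${annual_fee_revenue:,.0f} per year available to pay miners")  # $15,000,000,000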

I don't envy the careful balancing act the core developers must navigate to get there.

Monday, June 8, 2015

Colored Coins are here

Last November I wrote an "explainer" for Coin Center on the topic of colored coins. The basic gist of the article is that Bitcoins are a digital commodity which can be traded themselves and have a market value, but they're also a bit like blank pieces of paper. Any other financial instrument (cash, equity, debt, REIT, etc.) can be printed on them, and then traded via the blockchain. Back when I wrote the article this was more theoretical than practical, but technology and business have advanced in the last seven months. A few items:

NASDAQ has announced an experiment with private company stock on the blockchain. This means that private companies, when they issue shares to employees or early stage investors, will do so by sending the shares to a Bitcoin wallet that is colored coin compliant. The employees can also redeem their shares, or trade them on authorized secondary markets, using the same technology.

Overstock has issued their first debt instrument on the blockchain. The debt issuance this time around is limited to accredited investors, but that's a restriction of the US securities laws, not the technology. If this proves successful as a means of debt issuance, a very large market could be captured by bitcoin.

LHV Bank in Estonia has issued Euros on the blockchain. These bank obligations are supposed to be a cash substitute for local payments, to directly compete with the credit and debit card networks. Although technically not cash (because only the European Central Bank can issue Euros, and they haven't issued any to the blockchain) this instrument is probably most usefully thought of as a money market fund share that trades at par. It's 1 Euro. According to the lead developer of this project, this is currently in a test phase with only 100,000 EUR in liquidity.

These are significant developments for the Bitcoin network, and address some of the key issues with widespread adoption. Among the issues that currently put Bitcoin at a disadvantage relative to the card networks or bank payments are the volatility of the bitcoin price and the need to trade out of the bitcoin network after each transaction in order to have a currency that's commonly accepted in your local economy. With colored coins, both of those objections go away. Colored coins use only a de minimis amount of Bitcoin (fractions of a penny) to mark their value on the blockchain, so their market value is always equal to whatever financial instrument they represent (1 Euro, a $1000 bond, etc.). When you receive 53 Euro via colored coins, you have 53 Euro, and that doesn't fluctuate in your local currency (Euros).
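
To picture what "marking value on the blockchain" means, here's a toy data structure for a colored output. It doesn't follow any particular colored-coin protocol; the field names and the "LHV-EUR" identifier are mine, purely for illustration:

    from dataclasses import dataclass

    @dataclass
    class ColoredOutput:
        """Toy model of a colored output; fields are illustrative, not a real protocol."""
        satoshis: int        # the de minimis bitcoin that merely carries the mark
        asset_id: str        # identifies the issuer and instrument (invented label here)
        asset_quantity: int  # units of the instrument, here euro cents

    # Receiving 53 EUR as a colored coin: it is worth 53 EUR no matter what BTC trades at.
    payment = ColoredOutput(satoshis=600, asset_id="LHV-EUR", asset_quantity=5300)
    print(payment.asset_quantity / 100, "EUR")  # 53.0 EUR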

Volatility - gone.

Need to trade off of Bitcoin to get a useful local asset - gone.

Further, colored coins keep the primary benefit of bitcoin transactions, which is irreversibility. When a merchant accepts bitcoins, it's just as much his as if he accepted cash. The customer may seek a refund for some reason, but that refund will be decided by the merchant or a Court of law, not the credit card processor. This produces a great deal of certainty which will be very attractive to merchants. "As final as cash" is a good marketing slogan for merchant adoption.

The two remaining stumbling blocks, as I see it, are privacy and fast transaction time. Let's deal with the second of those issues first.

Credit and debit card networks confirm their transactions fairly quickly, usually on the order of a few seconds (ignoring the chargeback issue). Bitcoin blocks are only confirmed on average every 10 minutes, and you want at least three confirmations to be fairly certain the transaction is accepted. This is probably fine for selling your privately held equity back to your employer, as in the NASDAQ example above, but it's obviously unacceptable for everyday shopping at the grocery store or pub.
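
The waiting-time arithmetic that makes this a non-starter at the register:

    avg_block_interval_minutes = 10  # Bitcoin's target average
    confirmations_wanted = 3         # the rule of thumb above

    print(avg_block_interval_minutes * confirmations_wanted,
          "minutes, on average, before a payment is reasonably final")  # 30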

Thankfully I think this issue will be solved thanks to the Lightning network. I'm not sure how long it will take the Lightning network to become active, but a lot of the core devs support the initiative and Cuber (the company behind LHV's Euro coins) plans to support it as soon as it's up and running. I consider this "fairly certain".

As for privacy, I'm not sure how to get there. The current network of banks and credit card companies isn't private from the banks or the government; they can see what you're doing. But at least your friends and neighbors can't. On Bitcoin, anyone can explore the blockchain. Here's the transaction for the first Overstock debt issuance, to their CEO. There are Bitcoin tumblers which provide some level of anonymity, but I'm not sure how they'd work with colored coins. You'd at least need a very liquid market in the instrument you're trading for it to work, which I guess is feasible for cash but I'm not sure about the other less liquid instruments. Those may just be a public record.

But be that as it may, I'm quite fascinated by the developments here. Essentially since the invention of banking in Venice, over-the-counter trading has been limited to bearer instruments (rare) or between banks. The idea of regular folks exchanging cash and other assets directly, over the Internet, without any institutional intermediary, and for only a nominal fee (fraction of a cent), is truly revolutionary. Not to be excessively hyperbolic, but this really will "change everything" about finance. It's a very exciting time to be alive.

Thursday, June 4, 2015

Buddy, can you spare a hand?

I was surprised this morning to see breathless headlines that a rat limb had been grown in a lab. I have been following the progress of synthetic organ generation, and to date the most advanced techniques I was aware of could only grow very thin organs like skin or bladder sacs or very small organ tissue samples, such as a small patch of liver cells suitable for drug testing but not transplant. I thought we were at least a decade away from growing full, complex organs such as a heart or kidney, and didn't even have an estimate for when we could grow something as complex as a limb (with all its various tissue types that need to connect to each other in just the right places). The ability to grow a limb would represent a quantum leap in technology.

Thus I was not surprised to learn, upon reading the paper, that they had in fact not grown a limb in the lab. Not entirely.

The chief challenge with growing artificial organs today is organizing the stem cells into the correct 3D shape. After all, an organ isn't an undifferentiated mass of cells. It has veins and arteries and functional systems that all need to be in the right place and aligned properly in respect of each other, or the thing doesn't work and quickly dies.

Currently there are three solutions for the above problem. The first one is to use a 3D-printer to "print" the cells into the correct place. This works okay for small tissue samples, but we haven't figured out how to print anything bigger than a couple millimeters. The second solution is to take a donor organ and wash away all its cells, leaving only the scaffolding (or "intercellular matrix") behind. This scaffolding can then be seeded with stem cells from the donee, and the cells (if cared for properly) will grow into the scaffolding like a vine growing up a trellis, forming a new organ. The third solution is a combination of the first two: 3D-print just the scaffolding, and then seed it with stem cells to grow in place.

The second of these methods is how this rat limb was created. A donor limb was still necessary; it was stripped of its cells, and the remaining scaffolding was then seeded with new cells. The advance (and it is a real advance) is that they were able to get all the different necessary tissues to grow nicely - bones, nerves, muscles, skin, etc. This is a good technological advance, but it doesn't free us from the need for organ donors. Alas.

The good news is that the 3D-printing of scaffolding, followed by seeding with stem cells, is coming along nicely. The most recent advance I could find quickly is the growth of this synthetic larynx. It's a promising technology that one day soon should free us from the need for organ donors entirely. But for now, limbs are still at least a decade away I'd guess.

Wednesday, June 3, 2015

Quantum Phenotype

This post continues a conversation I started on Twitter and Facebook regarding the biological basis of homosexual attraction. I have decided to respond here as long-form writing is really a better medium for discussing complex arguments. My primary interlocutor is JN, and this post will be addressed largely to him, but perhaps others will find it informative.

(Disclaimer: This post contains no political, ethical, moral, or religious conclusions. Any such insights the reader draws from it are their own. This post is simply my understanding of the current science.)

The start of this conversation was my assertion that sexual attraction is hard-coded into human physiology, and that culture/socialization may encourage or discourage our acting on that attraction, but culture cannot create a sexual orientation where none exists in the biology. My analogy for this is diet. A culture can influence what you eat, and how you eat, but only within the limits of what your biology allows; it can't make you an herbivore. You just don't have the biology for it. JN's strongly held belief is that culture can in fact create homosexual attraction.

As a primary source, JN provided a link to this Columbia University paper which ruled out simple genetic and hormonal models of homosexual attraction, and posited that there must be socialization components to this behavior to cover the explanatory "gap" left by the genetic/hormonal explanations. Their primary reason for believing this was that in opposite-sex twins (one boy, one girl) the boy was more likely to express homosexual attraction as an adolescent if he had no older brother, but showed the same odds of expressing homosexual attraction as anyone else if he did have an older brother. The presence of an older brother obviously cannot affect uterine environment or genetics, so the conclusion was that the older brother provided a social role model that guided the boy-twin away from homosexual attraction.

In opposition to this paper, Wikipedia provides a lengthy list of physiological markers which are different between gay and straight members of both genders. There are differences in brain structures, finger lengths, startle responses, handedness, hair-whorl direction, and so forth between gay and straight populations, and in many of these categories the homosexual shows characteristics associated with a heterosexual of the opposite gender. Put simply, there is no way culture or socialization can change the length of your fingers or the shape of your cerebral lobes, especially when these markers are present prior to birth. It is "unpossible".

So where does that leave us? The Columbia paper rules out a simple genetic explanation of homosexual attraction, and the existence of physiological markers rules out culture and socialization.

I believe that the Columbia paper is mistaken in two respects. Firstly, they only measure self-reported attraction, not biological markers. And secondly, they used the wrong model of how genetics work. I don't blame them for that though, as the paper was written in 2001 and we have learned a lot about genomics in the last decade.

Firstly, let's dispense with the self-reporting. We've known since the Kinsey Study that as much as 10% of the population may engage in homosexual activity at least once during their adult years. Sexuality isn't an on/off switch between gay and straight; there's a range between the two, with individuals reporting a varying degree of bisexuality. It makes perfect sense that if a person is biologically bisexual (but not strongly so, maybe a 1 or 2 on the Kinsey Scale) that culture or socialization can influence whether they explore those feelings. That would explain entirely how birth order could affect self-reported feelings of attraction.

As for the genetic model, we have learned in the last decade that the old Mendelian model of one discrete gene per trait doesn't hold for most traits. Craig Venter (who led the private race to sequence the human genome, using largely his own DNA along with a few other donors') had this to say about his own genetics:
"I found out that I have a high probability of having blue eyes," the blue-eyed Venter said in a telephone interview.

"You can't even tell with 100 percent accuracy if I would have blue eyes, looking at my genetic code," he laughed. "We all thought that would be simple."
Craig Venter was born with blue eyes. They never changed to any other color at any point during his life. Culture and socialization had nothing to do with it, any more than it did the shape of his nose. But his DNA doesn't say for certain his eyes would be blue either, only that it was a probability. And this is the difference between genotype and phenotype.

Your DNA is your genotype. It says what's probable, but doesn't lock in hard-coded certainties in all respects. All it does is set the beginning state of an incredibly complex and self-organizing dance of molecules that turns two sex cells into a zygote and eventually a baby. But this process is not a fancy clock following set steps, it's a big messy chemistry bath subject to hormonal signals, uterine environment, and pure random chance.

By the time you're born though, the roulette wheels have all stopped spinning. Your cerebral lobes are either symmetrical, or not. Your index and ring fingers are either the same length, or not. There's no going back, and there's no socialization that will change them. To borrow an analogy from quantum physics, DNA is the quantum state of probability that existed before your conception, and your phenotype is the observed result - and thereafter fixed. The only thing society can do is encourage or discourage you from acting on your phenotype's existing and hard-coded predisposition to various behaviors.
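
If it helps, here's a toy model of that distinction, with invented probabilities: the genotype only supplies odds, development rolls the dice exactly once, and the resulting phenotype is fixed from then on.

    import random

    def develop(genotype, seed=None):
        """Roll each probabilistic trait exactly once; the result is the fixed phenotype."""
        rng = random.Random(seed)  # the seed stands in for uterine environment and chance
        return {trait: rng.random() < p for trait, p in genotype.items()}

    genotype = {"blue_eyes": 0.90, "symmetric_cerebral_lobes": 0.60}  # invented probabilities
    phenotype = develop(genotype)
    print(phenotype)  # e.g. {'blue_eyes': True, 'symmetric_cerebral_lobes': False}, fixed for life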

Sidebar: What about the Greeks?

This section is primarily editorial in nature, and isn't based on much science (collecting firm data from 2,500 years ago just isn't possible). It addresses the point some people raise about how some societies (such as the Ancient Greeks) saw widespread homosexual activity, far greater than the 10% of the population found by the Kinsey Study to be at least partially bisexual. The argument here is that culture can in fact create homosexual attraction, despite everything I said above. I mean, haven't you read The Symposium by Plato?

Yes, I have. And I don't find Plato to be a trustworthy narrator. Rather than high-minded ideals of love and attraction, I'm reminded more of American prisons and NAMBLA apologists.

Among gays, there is a distribution of individuals who prefer top, bottom, and versatile positions during sex. There's no scientific consensus on what exactly the distribution is, or how culture may affect it, but the existence of these preferences is beyond dispute. I would call them "common knowledge" among the gay community, and I believe my gay friends when they tell me they have such preferences.

No such distribution is observed in the Ancient Greek tradition of paiderasteia. The older male is always the top, and the younger male always the bottom. If these were actual homosexual relationships the reverse would be true at least half the time. The practice of paiderasteia (both in Ancient Greece and in primitive tribes in Papua New Guinea, who have been studied by modern sociologists) is only consistent in my mind with institutionalized sexual abuse. The young men studied in primitive tribes show biological markers of abuse too, even where their culture says it's "Okay" for older men to do those things. They show stress markers, and are not engaging in the practice joyfully. If we had a time machine, I'd bet $1,000 we'd find the same among the Ancient Greeks.

The male sex drive is very strong, and if access to females is restricted (whether by the social rules of Athens, or because the male is locked up in prison, or because he's a shepherd alone with his sheep for weeks at a time), then it will find outlets by other means. The Ancient Greeks are unique, in my mind, only in the lengths they went to to romanticize and justify the practice.

Thursday, May 28, 2015

Our Vegan Future

I take no pleasure in this prediction. I am not a vegan, nor do I have any strong wish to become one. And yet, I think a lot of people will be vegans in the near future for simple reasons of technology, cost, and environmental sustainability.

A popular talking point among vegans is that a calorie of animal product is much more expensive in terms of land, labor and energy resources to produce than a calorie of plant food. And this is true. It's certainly less efficient to grow corn, feed it to a cow, and then eat the cow. Cut out the middle cow and eat the corn directly! But of course, corn (and any other plant) doesn't taste like cow, and to date the rich West has been willing to pay the premium necessary to purchase the flavor and texture of meat.

Several trends however are coming together which I think will move a substantial amount of our consumption away from animal products towards plant products. From there, changes in politics will finish off animal farming as a major industry.

The first trend is merely the limitation of land resources. Already, 26% of the Earth's land surface is devoted to grazing, and 1/3 of our arable land is used to grow crops fed to food animals. I don't know what the Earth's sustainable level of beef production is, but seeing how over-grazing is already a severe problem in some areas, it may well be less than the level we have now. Combine this fixed supply with a growing population and we should expect the price of animal products to rise over time.

The second trend is the improving sophistication of "fake" animal products. Muufri is making "animal free" milk that matches real cow milk protein-for-protein and fatty-acid-for-fatty-acid, just from plant sources. Impossible Foods is doing the same for meat, even going so far as to replicate heme, the molecule that makes blood red, from plant sources. The promise of both of these companies, and others working in the same field, is to deliver plant-based products which are indistinguishable from their animal-based counterparts, but at lower cost. Already in blind experiments, according to Impossible Foods' research, professional chefs are unable to tell the difference between their products and the real thing during preparation and cooking, and customers can't tell the difference either. And the product is improving from there, since they have precise control over the chemistry. This isn't your uncle's tofu burger.

The third trend is the direct manipulation of DNA using technology like CRISPR. In 2013, the biotech startup Pronutria came out of stealth and David Berry (one of their founders) gave a Google Solve For X talk on their technology. Basically, they created a library of single-celled creatures and DNA tools for modifying them to create the amino acids and vitamins necessary for human health from a continuous process algae farm. Pronutria observed that by growing these nutrients directly from algae we can provide all of the non-calorie nutrient needs (proteins and vitamins, excluding starch and fat) of the entire planet from a non-arable patch of land (or calm ocean water) the size of Rhode Island. Obviously the fresh water and energy requirements are also much lower than standard agriculture.

Taken together, the above three trends say that the costs of animal products will rise, the quality and price of their plant-based substitutes are already near-equal, and the energy and resource cost of plant-based substitutes are already lower and destined to fall much further. Over time the competitive bidding for a good steak from the global rich will drive up the cost of fixed-supply "real" meat, while simultaneously scalable biotech alternatives will come to enjoy economies of scale and learning curves, driving down their prices towards the marginal cost of energy and non-arable land (low).

Will animal farming simply go away? Not immediately. There will certainly be many people who don't want to give up their animal-based foods, and will be able to afford to keep eating it. But there are 4 billion people in Asia who want to eat well, and by the end of this century there may be just as many in Africa. There simply isn't enough grazing land on Earth to feed everyone real meat, and given a sufficiently tasty substitute (which may be even more nutritious than the real thing), the world's poor and middle class will probably go for it. Over time, as more people get used to the idea of plant-based meat and dairy products, political acceptance of the known downsides of standard agriculture (the environmental and ethical issues surrounding animal welfare) will dry up. I can see a future where so much of the voting public becomes disconnected from eating animal-based meat that it acquires a reputation similar to fox hunting - something vaguely cruel and pointless that only eccentric rich people do. Regulations that make it increasingly expensive and rare would follow quickly from that point, in the name of animal welfare or environmental protection.

On the plus side, for those of you currently saddened by the thought of never having a "real" hot pastrami sandwich in the post-meat future, any real limit on Earth's human-population carrying capacity will be expanded out into the indeterminate future by the events described in this post. Malnutrition will be abolished anywhere supply chains and markets are reasonably functional, and we will become infinitely richer in the ultimate resource. So we'll have that going for us, which is nice.

Wednesday, May 20, 2015

Solar panels aren't computer chips

Much hay has been made in the last couple years of the steep price declines in solar panels. Here's a typically excitable piece by Noah Smith. The excitement of these posts seems to be driven by the expectation of exponential technological improvement over time. As Noah concludes:
The takeoff of solar-plus-batteries has only begun to ramp up the exponential curve
I don't think this is the correct point of view. There is only one technology which is riding an exponential curve, and that's the miniaturization of electronics. Each time the feature size of a chip or memory register shrinks, the density of compute, memory, and storage takes another step up an exponential curve. But most technologies don't work this way, and solar (despite being made from silicon) is not riding that curve.

Solar panels are a bulk-manufactured commodity. They aren't terribly hard to make; Elon Musk described them as being slightly simpler to make than drywall panels. And that's true. What's relevant to predicting the future of solar though is that no one expects drywall panels to get 7% cheaper per year indefinitely. It's understood that their price is a function of their basic material, labor, and energy input costs, and that this cost floor can only be approached asymptotically, not passed through (barring a technological substitute of some sort).
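
To see the difference in curves, here's a quick sketch with invented numbers: the exponentially improving technology keeps compounding, while the commodity's price stalls at its input-cost floor.

    # Invented numbers purely to contrast the two cost curves described above.
    def exponential_price(start, yearly_decline, years):
        # The Moore's-law-style curve: improvement compounds without a floor.
        return start * (1 - yearly_decline) ** years

    def commodity_price(start, cost_floor, yearly_decline, years):
        # A bulk commodity: the price can fall, but only toward its input-cost floor.
        return cost_floor + (start - cost_floor) * (1 - yearly_decline) ** years

    for years in (5, 10, 20):
        print(years, "yrs:",
              round(exponential_price(1.00, 0.30, years), 3),       # keeps shrinking
              round(commodity_price(1.00, 0.50, 0.30, years), 3))   # stalls near $0.50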

Batteries are similarly bulk-manufactured commodities. That's why Elon Musk isn't relying on Intel to deliver him cheaper batteries each year. He's building the Gigafactory instead because economies of scale are the only way to drive down bulk-manufactured commodity prices in the short-run. Technological price decreases happen at a much slower rate in this sector, when they happen at all. The same economies of scale are why Solar City bought Silevo and plans to build a 1 GW/year production plant in New York.

So will Gigafactories help make solar panels cheaper? Yes, but only once - and it already happened. The solar panel gigafactories were built in China. I'm going to show you a chart now and let you guess when they were built:
This huge build-out in industrial capacity is what drove down prices. However, there was over-investment (Over-investment in China?!?! Sacre bleu!) and the spot price of solar panels was driven below production costs, leading to massive losses. Now the butcher's bill is getting paid.

The #1 solar panel module maker in the world is Yingli, but they got there by selling their panels at a loss for the last four years, and now they're on the verge of bankruptcy. Baoding Tianwei, a State-owned firm in China that is a supplier to solar companies, has defaulted on its bonds, blaming the glut of supply in the solar market. These two firms are not alone, I'm sure.

There will be a retrenchment in the production of solar panels as firms exit the business. Supply will contract until it meets up with demand again and the producers are profitable. The steep price declines we have seen in the last ten years won't be totally reversed, but they won't be totally kept either. Not in the short term.

So is there hope for future price declines? Sure, as long as you keep your expectations modest. The Solar City-Silevo merger I mentioned earlier is basically a technology play, relying on Silevo's superior technology to improve solar efficiency at the same price as today's panels. That's a good thing, and the price per watt will fall from that. But we shouldn't expect exponential curves in this industry. It's going to be a long, hard slog.

Tuesday, May 19, 2015

21 -- Part 2

My thinking on 21 has evolved a little bit over the first 24 hours. I'm going to lay out three scenarios for how I see the possibilities here.

First, to recap -

  1. 21 has built a single-core mining ASIC that can be embedded in any electronic or home appliance. So if your mobile phone is plugged in, fully charged, and on Wifi, it can do some hashing and contribute to a virtual mining pool operated by 21 (I'm assuming they have the sense to not run mining operations while you're out and about, killing battery life and using up your data). This will produce a small stream of Satoshis for the device's use. 
  2. The 21 Chip's mining process would be most cost-efficient in appliances that need to generate heat anyway, such as slow-cookers or washing machines. The chip can just be part of the heating element. But would Satoshis in your rice cooker be useful? I would think they'd be most useful in interactive devices such as your phone, tablet, home PC, or smart TV, where the heat is pure waste, so the economics are worse exactly where the Satoshis are most useful.
  3. The 21 Chip's mining process will never be as efficient as running your own (or hosted) mining rig in a low-cost electricity market, because of revenue splitting and local variance in the cost of electricity. Essentially you're spending $4 in electricity to get $1 in BTC (a rough sketch of this arithmetic follows the list). The $3 is the "cost" of having automatic, embedded Bitcoin in all of your devices. Convenience over frugality.
  4. Revenue splitting isn't just between the user and 21, but also allows for payments to be made to the manufacturer the chip is in, as well as the distribution channel. 21 specifically mentions the ability for retailers or mobile carriers to configure the 21 Chips they sell such that they earn BTC over time from a user's mining activity. The actual revenue split isn't disclosed. Does 21 guarantee a minimum share of income to the end-user? That's not clear.
  5. There is the possibility that devices will be able to use their Satoshis for authentication and identification. Of all the features mentioned in the 21's blog post, this is the only one that promises any real value to consumers.
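
As promised in point 3, here's a rough sketch of that arithmetic. Every figure in it is invented; 21 hasn't disclosed the chip's hash rate or the revenue split, so treat this as nothing more than a shape-of-the-problem illustration:

    # Every figure here is invented -- 21 has not published hash rates or revenue splits.
    device_hashrate_share = 1e-9     # guess: one embedded chip's share of global hashrate
    network_btc_per_day = 3600       # ~25 BTC per block * ~144 blocks/day (2015 reward schedule)
    user_revenue_split = 0.25        # guess: user's share after 21 and the channel take theirs
    btc_price_usd = 240              # roughly the mid-2015 price

    user_usd_per_day = device_hashrate_share * network_btc_per_day * user_revenue_split * btc_price_usd

    chip_watts = 2                   # guess: extra power draw of the embedded miner
    electricity_usd_per_kwh = 0.12   # typical US residential rate
    power_usd_per_day = chip_watts / 1000 * 24 * electricity_usd_per_kwh

    print(f"user earns ~${user_usd_per_day:.5f}/day, pays ~${power_usd_per_day:.4f}/day in power")
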
The confusing part of their blog post announcement is that there are a number of statements made which stand directly at odds with each other. For instance, they go on at length about subsidized devices and making payments to channel partners, but then say this:
Well, at 21 we are less concerned with bitcoin as a financial instrument and more interested in bitcoin as a protocol
And this:
Crucial to this is the idea that bitcoin generated by embedded mining is more convenient — and hence more valuable — than bitcoin bought at market price and manually moved over to the site of utility.
These statements seem to concede my point #3 above, which is that the BTC generated by these devices will be peanuts-small and obscenely expensive compared to just running a mining rig. How much can a mobile carrier really make off of a cheap smartphone in the developing world? A couple dollars per year, at most? That's hardly enough to meaningfully defray the costs of a new electronic device.

I take them at their word that they consider BTC more useful as a protocol than as a financial instrument. Especially at the Satoshi price level! But what to make of all the space they spend on making arguments about payment streams or making micro-payments for things? Would the average user even care about the ability to buy, mayyyybe, one free song per month from their mining rewards? I have my doubts.

Meanwhile, this quote indicates a deeper purpose, about changing the basic fabric of computing:
Conceptually, we believe that embedded mining will ultimately establish bitcoin as a fundamental system resource on par with CPU, bandwidth, hard drive space, and RAM.
And:
Towards that end, our team of PhDs in EE from MIT, Stanford, and CMU has built not just a chip, but a full technology stack around the chip — including reference devices, datasheets, a cloud backend, and software protocols.
So here's what I think: we are looking at a classic bait & switch, but I'm not sure yet whether the intended patsy is the consumer or the manufacturing sector. Which one it is will determine whether 21 is a "good" company or an "evil" one.

Subsidies, Revenue Sharing. This is the bait. Manufacturing electronics is a famously low-margin business. Margins on RAM or chips are razor-thin, and the ability to eke out even another 1% of margin will be leapt at. The sales pitch for getting the chip included in devices is easy: "Hey, manufacturers, how would you like to earn a perpetual revenue stream from your devices?" It's almost free money to them.

With fast manufacturer and distribution uptake, consumers will end up with devices with this functionality whether they want it or not. It'll just be there, and the revenue splitting will be done by 21 (not the device). Even if there are ways for consumers to turn it off or modify the functionality (likely requiring jail-breaking the device), most won't.

Meanwhile consumers will see higher electric bills, but probably not so much higher they'll really care. They'll just see a status indicator that they have 50 Satoshis in their wallet, or whatever.

Priming the Pump: With fast manufacturer and distribution inclusion, and a 2-3 year upgrade cycle in mobile phones, we could have 21 BitShare chips in every mobile device in the world within a couple years. This lays the groundwork for massive developer support of the new features this allows.

However, for that to happen, 21 needs to protect the consumer's revenue split from greedy manufacturers and distributors. If the manufacturer can take 100% of the Satoshis that aren't taken by 21, then the end-user gets nothing, and the potential for having automatic BTC just magically appear in their hardware wallet evaporates.

This leads to three scenarios:

GOOD 
21 sees the opportunity to be the "AOL CD" of Bitcoin. Back in the 1990s average folks didn't know what the Internet was, or what it was for, or why they needed it, and it was hard for AOL to market their product, so their solution was to rain down Biblical plagues worth of free trial CDs on the people. Literally everyone in America ended up with a free trial CD (or a dozen of them) whether they wanted one or not, and a number of them put it into their computer on a "Why the hell not?" basis.

This could be a similar situation. Because of the financial incentive to manufacturers to include the 21 BitShare chip in every device, the public gets BTC on their mobile devices whether they want any or not. And each week or month a few of them will start using them on the same "Why not?" basis. This strategy is about getting BTC in front of as many people as possible to bootstrap Bitcoin to being the global financial protocol its boosters claim it's capable of being.

This scenario accepts that people won't voluntarily buy phones with BTC and load them up with Satoshis, mostly due to a lack of consumer education, but believes that the value is there if they can just get people to use it. Obviously this depends on the authentication and smart contract value of having a few Satoshis on hand actually being valuable to consumers. Because if it isn't, all they get is costs.

EVIL
21 sees the opportunity to control a very large mining pool, and they're offering a cut of those profits to all manufacturers and distribution channels, and getting unsuspecting consumers to pay for it with "free" electricity. This way 21 can undercut the mining cost structure of all the other professional miners who have an electric bill they need to pay.

Consumers get a slightly cheaper device (maybe), but there's no minimum reward-split for the end-user; all they get out of this scenario is slightly higher electric bills. The smart contracts features never appear, or they appear but the manufacturer takes 100% of the BTC earned, so the end-user never has any BTC to sign smart contracts with. There's no benefit to end-users here. It's hoped by 21 that the electric bill cost bump will be low enough to be a "tolerable annoyance", rather than something painful enough for consumers to revolt over.

NECESSARY EVIL
This scenario recognizes that both the GOOD and EVIL scenarios are real, but the EVIL scenario is seen as an intermediate stage to a future where consumers become educated about Bitcoins, want Bitcoins, and demand devices that share the majority of their mining rewards with the end-user, and not the distribution channel or even with 21. Also the mining chip eventually takes a back seat to the real feature that allows IoT commerce - hardware wallets in everything.

This scenario also contemplates that 21 won't be able to stamp out all other miners, so we don't have to worry about a 51% attack in the network. If the strategy is successful, competitors will appear en masse. It's not like ASIC mining chips are hard to design. Compared to a CPU or GPU, they're dirt simple. Some Chinese firms will reverse engineer the features necessary to work with whatever standards 21 has developed, and will sell those chips at a pittance. The mining pools will be as fragmented as the companies selling these chips. The share of revenue going to the chip companies will be driven down to the minimum viable margin, and it only remains to be seen whether the lion's share of the rewards go to the distributors or the end-users. Ultimately we end up in a place where every device in the world contributes a little hashing power to securing the network, and a 51% attack is physically impossible even by adversaries as well funded as the US or Chinese governments.

BUT WHICH IS IT?
Heck if I know. Frankly, reading the launch blog post by the CEO of the company, I can't help but be dismayed by how poorly written it is and how many logical inconsistencies it contains. It's a very bad start, and suggests either that they're really disorganized and have no idea what they're doing, or are actively trying to deceive people. Or maybe they're just really, really bad at explaining themselves. Who knows. We will have to wait and see what sort of contracts they actually strike in the manufacturing sector.

Monday, May 18, 2015

21

21 has announced their strategy: they've built an embeddable mining ASIC that can fit inside any power envelope, such as a mobile phone or home router. Or even a toaster, I guess. Their revenue model is to sell the chips to manufacturers and to also collect a portion of the mining revenue the chips generate. Although 21's announcement doesn't name the revenue share number, previous rumors pegged it at 75% for 21 and 25% for the user. So in essence, you get a small discount up front in exchange for a higher electric bill, forever. This is worse than a high-APR loan from Sears for furniture. That at least ends at some point.

Of course, the announcement from 21 says that the manufacturer can also keep a share of that revenue for itself, so the true end-user could get much less than 25%. Maybe even 0%. But they will still have to pay that electric bill.

How does this pass the "good business" sniff test? Does anyone at 21 actually care about delivering value to the end users?

Essentially what users are paying for is the chip and their electric bill, and what they get is a very small stream of Satoshis (or possibly zero Satoshis). Presumably 21 is running some kind of virtual mining pool that will smooth out the mining rewards, so that everyone (or at least the manufacturer) get some small level of income. But this mining pool also presents a risk for Bitcoin. If 21 succeeds in getting its chips embedded all over the place, and they mine Bitcoins using "free" electricity from their users, then they can out-compete all other miners and possibly put them out of business. And if they approach or exceed 51% of the hashrate with this pool, Bitcoin is no longer decentralized. It would be controlled by a single US firm that could be targeted (overtly or covertly) by the US Government.

I can't say I'm excited about the ethics of the business model either. To the end user who's spending the Satoshis, they're paying a huge mark-up for the Bitcoins compared to buying their own mining rig in exchange for the convenience of having a few Satoshis on hand at all times. Between losing 75% or more of the bitcoins they mine to 21 and the company they bought the hardware from, and the fact that most end-users have more expensive electricity than the industrial miners located in Oregon or Iceland, they're essentially paying a 400% tax on the mining work they do. Is that worth it? Is that a good deal for end users? I guess it depends on how much they value convenience, and how their mining costs relate to the open market cost of BTC. But it's not obviously a deal to me.
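
To make that markup concrete, an invented example (the real split and the buyer's power costs aren't public, so these numbers are only illustrative):

    # Invented figures to illustrate the markup; the actual split and power costs aren't public.
    btc_mined_value = 1.00    # $1 worth of BTC produced by the embedded chip
    user_share = 0.25         # user keeps 25% after 21 and the manufacturer take their cuts
    electricity_cost = 1.00   # residential power costing as much as the BTC is worth

    user_keeps = btc_mined_value * user_share                  # $0.25
    cost_per_dollar_kept = electricity_cost / user_keeps
    print(f"${cost_per_dollar_kept:.2f} spent per $1.00 of BTC the user keeps")  # $4.00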

I can see this being a good business for 21 though, at least in the short term. They sell the chips and also get the recurring revenue from the mining work for as long as the device is plugged in somewhere. But is it good for users? It feels predatory to me.

Let's look at the use-cases proposed by 21 in their blog post. Several of them are exactly the same thing, phrased different ways. I have thus grouped them categorically:

Category 1: Overpaying for Bitcoins:

  • Micropayments. You can conveniently spend Satoshis on stuff that you overpaid for in the first place. Convenience is nice, but isn't there a way to solve convenience without overcharging people for the medium of exchange? Also, the amount of money you spend is inherently constrained by the revenue generated by the 21 chip. How many articles per month or downloaded songs is that? Can't be many.
  • Devices pay for services. Same as above: convenience in exchange for being badly over-charged, but instead of content you're buying some service. A mesh network perhaps. Mesh networks are great. Isn't there a way to pay for them without paying the 21 tax?
Category 2: Actually not even a working feature
  • Machine Twitter. Only works if they ever get Sidechains to work. I'm ignoring this for now.
Category 3: Extracting revenue from users
  • Devices can pay channel partners. This is a way for manufacturers, retailers, and mobile carriers to continue to extract mining revenue from users long after the device has been sold, or is even under warranty. Yay! I'm excited about this possibility, aren't you?
  • Silicon-as-a-service. This seems ridiculous. There's no way a home router's single-core miner produces enough BTC revenue to pay for the miner. Maybe it's a little cheaper, like Amazon's Kindles with Ads, but not free. And the user pays for the additional electricity used, so it's more like a high-APR "payment plan" that never has any hope of being paid off. 21 just keeps taking your bitcoins, forever.
  • Bitcoin subsidized devices. This is the exact same point as previous. Why is it even a separate item on their list?
Category 4: Possibly Useful
  • Decentralized Device Authentication. This is possibly useful. In fact, it's the only part of the business model that seems useful to the end user. Unfortunately it's also the least fleshed out, and depends on developers of future services using the feature.
Category 4 is the only category of "features" that I consider a genuinely good deal for the user. It's only useful, though, with widespread adoption. And widespread adoption only happens if manufacturers build the chip in. And their incentive for including the chip is only as strong as the revenue they can extract from the Category 3 features, which steal electricity from end-users. And if the channel participants keep too much of the Satoshi stream, the end user has nothing to spend, making the whole point of 21's existence worthless. I hope 21 has the sense to cap how much revenue the channels can take; otherwise the end-user ends up with nothing except a couple bucks knocked off the sticker price and a higher electric bill.
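As for the "silicon-as-a-service" point above, here's the back-of-the-envelope revenue estimate I promised. The embedded chip's hashrate is a pure guess on my part; the network numbers are rough mid-2015 ballpark figures.

    # Rough expected-revenue estimate for a small embedded miner.
    device_hashrate = 10e9        # 10 GH/s, assumed for an embedded single-core chip
    network_hashrate = 400e15     # ~400 PH/s, rough mid-2015 network hashrate
    block_reward = 25             # BTC per block (pre-2016 halving)
    blocks_per_day = 144
    btc_price = 240.0             # rough mid-2015 $/BTC

    daily_btc = (device_hashrate / network_hashrate) * blocks_per_day * block_reward
    print(f"Expected mining revenue: {daily_btc:.6f} BTC/day")
    print(f"That's about ${daily_btc * btc_price * 365:.2f} per year, before the split")

A few dollars a year, split between 21, the manufacturer, and the user, and shrinking every time the network hashrate grows. That's not paying for a router.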

Also, let's think about the long term implications. If 21 succeeds, they will attract competitors. Mining ASICs are dirt simple to engineer. There will be a Chinese competitor to 21 within a year or two at most, and they'll compete for manufacturer share by offering a larger revenue share from the mining work. Eventually the manufacturers will have most of the bargaining power here, since they control the relationship with the end-users buying the devices. Meanwhile end users will get the mining chip that's best for the manufacturer, not the end-user, since it's an add-on feature most people won't care about. I don't see the situation improving for end-users over time, but it may get worse for 21.

If there's a best possible future though, it's this: 21 succeeds, Category 2 gets people using Bitcoin, and thus Category 4 gets a lot of support; the convenience of adding Bitcoins purchased on the open market to your wallet improves, which means a real micro-payments market appears; and informed users start demanding a "fair" division of bitcoins from their mining work, and some manufacturers actually deliver on that. In that future, we pay a fair price up front for our ASIC, and in return get lots of cool services and also get to keep and spend most of the Satoshis our mining work actually generates (minus only the pool operator's fee, which would be subject to market competition).

A lot of things have to go right for that to happen though, and I can just as easily see a situation where the channels take 100% of the profit from these chips and users get nothing but electric bills.

Friday, May 15, 2015

SpaceChain: A decentralized, private space program, powered by Bitcoin

In what seems to be a perfect convergence of my interests, SpaceChain (which is headed by Iman Mirboki of Sweden) is building a decentralized space program using open-source software and hardware designs, bitcoin, and crowd-sourcing. Their goals are ambitious, including putting a satellite into permanent orbit using a rocket they have built themselves. They are also integrating Bitcoin as a means for space assets (satellites, rockets, etc.) to exchange payments for services. Their proof of this concept is to launch both a satellite cargo and a fuel cargo in separate modules and then have the satellite cargo pay for a fuel transfer.

I can't tell you how exciting this is. Many folks in the Bitcoin and cryptocurrency space have been talking about machine-to-machine payments as a use case for Bitcoin, and many in the space access community know that on-orbit refueling is a critical step toward expanding human activity beyond low-Earth orbit in an economical way. Combining these two trends is beyond cutting edge.

In the near term I have realistic expectations of SpaceChain, which is to say that they won't accomplish much very quickly. Such poorly funded efforts rarely do. But still, the existence of the program at all is a testament to how the Internet + Bitcoin is changing (and will continue to change) our world, along with the advances we are seeing in electronics, sensors, and robotics.

For nearly 25 years now the Internet has allowed people to organize thoughts and software code around the world, and it gave us things that couldn't have existed prior to it, like Linux and Wikipedia. Now with Bitcoin it's finally as easy to coordinate and aggregate economic value and interest on a global scale, and this will mean that decentralized teams (or even just aggregations of fans) will be able to do things requiring money that it used to require corporations or governments to organize. Kickstarter is only a first taste of decentralized economic activity, built as it is on top of the old credit card networks.

Decentralized manufacturing (3D-printing and home CNC-mills) plus advances in commodity miniature electronics thrown off by the smartphone supply chain will be the other half of this story. With the democratization of the tools of manufacturing, the hardest part of manufacturing that remains is figuring out how to build something. When that knowledge can be aggregated and shared for free across the globe, the barriers to entry and experimentation fall rapidly.

Wednesday, May 6, 2015

FinCEN fines Ripple $700k, requires remedial actions

Post updated to reflect comments from Ripple; see bottom of post.

The U.S. Financial Crimes Enforcement Network (FinCEN) has announced an enforcement action against Ripple Labs, Inc. (Ripple) and its subsidiary XRP II, LLC (XRP). This is the first enforcement action against a virtual currency company. The findings of fact and violations include that Ripple acted as a money service business (MSB) without registering with FinCEN, and that both Ripple and XRP acted improperly. The fine is $700,000.

The rules that FinCEN found Ripple and XRP to be in willful violation of are primarily ones of anti-money laundering (AML) compliance. They did not implement their AML program quickly enough, when it was implemented it was not good enough, and a number of transactions which FinCEN deemed to warrant the filing of suspicious activity reports (SARs) did not get filed. These are the sort of violations that any exchange (whether dealing in virtual currencies or strictly fiat ones) might commit.

The remedial actions required by FinCEN are what seem to be causing the most conversation on Twitter and similar online discussion forums.

A lot of it is exactly what you'd expect. There's the $700,000 fine, a requirement that Ripple and XRP implement better AML practices, and the requirement of a third-party auditor for the next six years to verify that the AML program is in place and being followed appropriately. There's also a requirement to look back at the last three years of data available to Ripple and file any SARs that such information may warrant.

Further, however, the remedial actions may require (though it's not entirely clear) updates to various parts of the virtual currency software developed by Ripple. This is not the software and business logic used internally at Ripple or XRP to run their businesses, but the software that runs the Ripple protocol and network.

Let's take a look at some of the requirements specifically.
Within 30 days of the date of this agreement, Ripple Labs and XRP II will move its service known as Ripple Trade (formerly known as Ripple Wallet, which allows end users to interact with the Ripple protocol to view and manage their XRP and fiat currency balances), and any such functional equivalent, to a money services business that is registered with FinCEN (the “Ripple Trade MSB”).
So the wallet needs to be run out of the FinCEN registered MSB. I'm surprised this isn't already the case, but maybe it's a question of formally moving ownership of Ripple Trade from Ripple to XRP (the latter is registered currently).
Users of Ripple Trade (which will include all users registering after the date of this agreement and any existing users who register at the request of Ripple Labs) will be required to submit customer identification information, as required under the rules governing money services businesses, to the Ripple Trade MSB;
This seems a lot like what Coinbase and Circle already require today in the Bitcoin world. One thing that's different however is that, per the Ripple Terms of Use (see Item 3), Ripple Trade does not have access to user keys. They're a web-based wallet like Blockchain.info. This indicates that while wallets that lack keys to your accounts may not have the same level of fiduciary obligations as a hosted wallet like Coinbase, they still have AML obligations under FinCEN rules.
After 180 days of the date of this agreement, Ripple Labs will (1) prevent any existing Ripple Trade user who has not transferred to a wallet or account with customer identification information from accessing the Ripple protocol through the Ripple Trade client, and (2) not otherwise provide any support of any kind to such a user in accessing the Ripple protocol.
So, 180 days from now, any Ripple Trade account which is not AML-compliant will be frozen. Interestingly this requirement doesn't just say that they cannot trade with Ripple Trade. It says the account cannot be allowed access to the Ripple protocol. If they cannot access the protocol then they cannot move the money to another wallet or a personal wallet. The money is just stuck there until they meet the reporting requirements.
8. Enhancements to Ripple Protocol: Within 60 days, Ripple Labs, XRP II, and the Ripple Trade MSB will improve, and upon request provide any information requested by FinCEN or the U.S. Attorney’s Office as to the use and improvement of, existing analytical tools applicable to the Ripple protocol, including: (1) reporting regarding any counterparty using the Ripple protocol; (2) reporting as to the flow of funds within the Ripple protocol; and (3) reporting regarding the degree of separation. 
I am admittedly not as familiar with the Ripple network as I am with Bitcoin. I'm not sure what sort of blockchain analysis (if that's even the right word) is possible on Ripple. But this requirement seems to be saying that Ripple will build the tools necessary to do blockchain analysis that can positively identify any counterparty and follow the flow of funds between parties. No doubt there will be some connection between these tools and the AML-required customer information collected.
10. Transaction Monitoring: Ripple Labs will institute AML programmatic transaction monitoring across the entire Ripple protocol, and will report the results of such monitoring to the U.S. Attorney’s Office, FinCEN, and any other law enforcement or regulatory agency upon request. The monitoring and reporting must include, at a minimum: (a) risk rating of accounts based on the particular gateway used; (b) dynamic risk tools to facilitate investigation of suspicious activity, including counterparty reporting, flow of funds reporting, account flagging of suspicious accounts, and degrees of separation reporting; and (c) other reports of protocol-wide activity regarding any unlawful activity
More monitoring and blockchain analysis, this time (I think) for programmatically identifying accounts and activity which warrant a SAR being filed.

An important thing to note is that both Section 8 and Section 10 are only possible because Ripple runs on a shared, transparent ledger (like Bitcoin). There is no equivalent to "monitor across the entire protocol" in regular banking, because there is no shared ledger; everything has to happen at the individual bank level. FinCEN is leveraging the existence of the shared ledger to get better information than they normally could get from any single firm, and they're having Ripple do the development work for them. This certainly shows they understand how the new technology is different from the old technology, and how it creates opportunities for FinCEN as a regulator.
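To make "flow of funds" and "degrees of separation" concrete, here's a toy sketch of the kind of query such tools would run. The edge list is invented and this is obviously not Ripple's actual tooling or ledger format; it's just the graph walk those reports would be built on, which only works because the whole ledger is public.

    # Toy "degrees of separation" query over a transparent ledger. The ledger
    # here is just a list of (sender, receiver) pairs; real tooling would read
    # the actual Ripple (or Bitcoin) ledger, but the graph walk is the same idea.
    from collections import defaultdict, deque

    transfers = [
        ("alice", "bob"), ("bob", "carol"), ("carol", "dave"),
        ("alice", "eve"), ("eve", "dave"),
    ]

    graph = defaultdict(set)
    for sender, receiver in transfers:
        graph[sender].add(receiver)

    def degrees_of_separation(source, target):
        """Breadth-first search: fewest hops funds take from source to target."""
        seen, queue = {source}, deque([(source, 0)])
        while queue:
            account, hops = queue.popleft()
            if account == target:
                return hops
            for nxt in graph[account]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, hops + 1))
        return None  # no path: funds never flowed from source to target

    print(degrees_of_separation("alice", "dave"))  # 2 hops: alice -> eve -> dave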

Okay, last section:
11. Funds Travel Rule and Funds Transfer Rule: XRP II and the Ripple Trade MSB will ensure, or continue to ensure, that all transactions made using XRP II, Ripple Trade, or Ripple Wallet will be, or will continue to be, in compliance with the Funds Transfer Rule and the Funds Travel Rule. 
The "Funds Transfer and Travel Rules" requires that one financial institution (including, in this case, Ripple Trade and Ripple Wallet) must pass on certain information to the next institution (or wallet) for certain kinds of transactions. Essentially, Ripple can't just send your XRP to Joe Wallet Co., it also has to send them your name and address.

The requirement that all "transactions" be compliant with the travel rule implies to me that Ripple Trade will have to be updated to disallow transactions where the requirements are not met. This further implies that assets can only be transferred out of Ripple Trade or Ripple Wallet if they are being sent to another compliant wallet.

The question of what counts as a "compliant wallet" remains open. Is it sufficient that a wallet promises to provide and receive the necessary information? What if it becomes common knowledge that one particular wallet host is lousy about keeping records? I doubt FinCEN would be happy with that, which further implies some sort of whitelist for wallets.

[SEE UPDATE BELOW] The last point to raise is this speech by FinCEN Director Calvery, given today. In it she says:
Ripple Labs will also undertake certain enhancements to the Ripple Protocol to appropriately monitor all future transactions.
My initial (and definitely not legal advice!) reading of Ripple's remedial action requirements didn't see any requirement that the protocol be updated, despite Section 8 being called "Enhancements to Ripple Protocol". The closest thing I see to a protocol-level change would be Section 11, but even the Travel Rule could be enforced at the wallet level (not the protocol level). The actual text of the remedies implies enhancements to monitoring software and wallets. But Director Calvery seems to think the protocol will be changed. I could plausibly speculate a number of reasons for this (loose language, last minute changes to the settlement, lack of technical clarity at FinCEN, etc.), but right now it's just not clear what this means.

UPDATE: The BitBeat blog reports that the protocol will not be updated after all. The following statements come from Ripple:
All that Ripple had agreed to, [Ripple Labs’ new Bank Secrecy Act officer, Antoinette O’Gorman] said, was to build enhanced “analytical transaction monitoring tools for monitoring transactions across the protocol” and to furnish information drawn from that monitoring to U.S. authorities upon request. The changes had “nothing to do with the protocol itself,” she said.
These monitoring tools are secondary applications that anyone could have built to analyze the flow of data across the publicly transparent ledger of Ripple transactions, she said.
Okay, so it's blockchain analysis tools, as I speculated above. This reinforces my point that FinCEN is taking advantage of the unique nature of transparent, decentralized ledgers to see further into the flow of funds than a bank-by-bank approach would allow.

The other interesting point is this:
Addressing another contentious point, Ms. Gorman said her company had argued that Ripple Trade, a wallet application with which people can view and manage their balances of XRP, Ripple’s native currency, should not be registered as a money service business, or MSB, under FinCEN rules because it was merely a software tool without power to take custody of funds or directly exchange currency. However, FinCEN was insistent, demanding that Ripple Trade be migrated to a properly registered MSB, which means that its users must submit customer identification information.
I would like to know more about FinCEN's basis for this demand. Is FinCEN going down the road of requiring that software which allows financial activity be hosted by regulated companies (where users must disclose identifying information)? Or did they just ask for what they thought they could get? With the question of whether the protocol is being changed resolved in the negative, this is now the biggest open question raised by this action.

Tuesday, April 28, 2015

"Drones" are a big category

Uber has announced food delivery in four cities. Uber is building self-driving cars. SpaceX has an autonomous spaceport drone ship. The FAA is (agonizingly slowly) creating rules for autonomous flying drones, while laws for self-driving cars are being handled at the State level. Rolls Royce is building an unmanned cargo ship. FedEx wants to build drone cargo aircraft to air-freight cargo from Asia. Drug smugglers are using unmanned submarines.

What do all these stories have to do with each other? It's that drones are bigger than just the aerial robots that hobbyists and the military use. This is a huge, tectonic change that is going to affect every part of our society that moves physical goods between two points. Amazon is leading in several categories (including inside their own warehouses), but there are so many possible use-cases that we should assume there will be many niche players. Uber's food delivery service is just the tip of the iceberg.

I think a good mental model for the coming "moving atoms infrastructure" is the Internet. The Internet is a dumb pipe for moving bits over TCP/IP. Once drones become a sufficiently inter-connected network, they will essentially become a dumb pipe for moving packages. The centralized systems currently managed by FedEx and UPS will be replaced by decentralized systems that route packages just like routers route packets.

This may seem like an unlikely scenario (who's the ARPANET in this scenario?), but I think it will become necessary. As drones take over moving more things, they will start to have to cooperate. UPS might not pick up at your location, so you hire an Uber to take your package to the nearest UPS store. Then there's a hand-off. How does that work? Uber and UPS will have to create a common protocol for exchanging payment, as well as shipping and handling instructions. This protocol gets adopted by others too (maybe Amazon and Wal*Mart use it to accept deliveries at their warehouses from UPS, FedEx, or a new competitor), and pretty soon you've got SMTP for boxes. Even if a company like Amazon wants to be the whole market, they can't. "Moving things" is too big for any one organization, so protocols of cooperation will be necessary.
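What might a hand-off in that protocol look like? Here's a purely hypothetical sketch; no such standard exists today, and every field name below is invented for illustration.

    # Hypothetical inter-carrier hand-off message: "SMTP for boxes".
    import json

    handoff = {
        "package_id": "PKG-000123",
        "dimensions_cm": [40, 30, 20],
        "weight_kg": 2.5,
        "destination": {"lat": 40.4406, "lon": -79.9959},    # Pittsburgh
        "handling": ["this-side-up", "no-freeze"],
        "current_carrier": "uber-courier",
        "next_carrier": "ups-linehaul",
        "payment": {"amount_satoshis": 120000, "escrow_txid": "..."},
    }

    # Each carrier that accepts custody acknowledges the hand-off and takes over
    # routing, the way a mail relay accepts a message and forwards it along.
    print(json.dumps(handoff, indent=2))

The interesting part isn't the particular fields; it's that once a message like this is standardized, any carrier (or any drone) can participate in the network.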

And the fun part will be when someone creates "Tor for boxes" to ensure privacy in shipping. There could even be companies that "ensure discretion" by sending an unmarked drone (aerial or wheeled, depending on your needs) to your location for pick-up and then handing the box off into the shipping network at a random location to hide it in the stream of commerce. The DEA will have a fun time with that one.

Friday, April 24, 2015

Could Uber make Orbitz obsolete?

Uber is a company that gets people from point A to point B, as quickly, cheaply, and hassle-free as possible. Right now they're doing it with black car and part-time taxi services, and in the future they'll definitely add self-driving fleets of cars to the mix. They may own those self-driving cars, or they may not. I can see a future where Uber licenses their self-driving technology to car manufacturers for minimal cost. Or even for free, as long as Uber's service software is hard-coded into the car as a choice. As long as end-users use the Uber app to call the car, Uber makes money on it eventually.

Right now we think of Uber as an "intra-city transport" company, but as self-driving cars, especially high-speed self-driving cars moving in close formations to take advantage of fuel-efficiency gains, become more common, the range of an Uber trip will potentially stretch much further. It's not inconceivable that users will summon a self-driving Uber to take them on a long, overnight road trip so that they're in a distant city by morning. A minivan with the middle and back seats taken out can easily fit a sleeping sofa. (I speak from experience on that point.)

At that point, Uber cars will compete with Southwest and Greyhound for intra-continental travel. Short range will always be based on car travel, and coast-to-coast will probably be air-dominant, but there will be a mid-range trip length where it could go either way, especially depending on the particular consumer's sensitivity to price and personal time-value.

At that point, I can see a future where the Uber app asks for a destination and then presents consumers with competing itineraries, all provided by Uber. It can drive you directly from New York to Pittsburgh, or it can drive you to the airport, arrange for a ticket, and pick you up at the other end in another Uber car. Point to point transport either way; just choose your trade-off between cost and time.
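This is speculative, but the ranking logic is simple enough to sketch: convert each option's travel time into dollars using the rider's own time-value and pick the cheapest total. The itineraries and prices below are invented.

    # Hypothetical itinerary ranking by price plus the rider's value of time.
    itineraries = [
        {"name": "direct self-driving car", "price": 180.0, "hours": 6.0},
        {"name": "car + flight + car",      "price": 260.0, "hours": 3.5},
    ]

    def total_cost(option, value_of_time_per_hour):
        return option["price"] + option["hours"] * value_of_time_per_hour

    for time_value in (10.0, 60.0):   # a price-sensitive rider vs. a busy one
        best = min(itineraries, key=lambda o: total_cost(o, time_value))
        print(f"At ${time_value:.0f}/hr of time-value, take the {best['name']}")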

And what role is left for Orbitz or Priceline in this future? Well, hotels and other destinations, I guess. It's not like AirBnB will eat 100% of that market.

Obviously the same logic above applies to Lyft and other Uber competitors, if they think of it. It will be interesting to see which of these companies is first to strike a relationship with an air carrier.

Thursday, April 23, 2015

Google Fi...rmware

Google has launched their mobile service. There's coverage all over the place, but most of it seems to miss the point. Ignore the Wifi; that's mostly irrelevant outside the home. Ignore the pricing; it's a bit cheaper than Verizon in the US, and about the same as T-Mobile, but that isn't the interesting part. The key feature of Google Fi is that it inserts a firmware layer between the end-user and the wireless networks.

The company that controls the end-user relationship dictates the market. Steve Jobs learned this when he first tried to make a phone and it resulted in the execrable Motorola Rokr. Jobs learned from that experience that he needed to find a mobile carrier desperate enough for market share that it would let him make the phone the way he wanted, and Cingular was that carrier. Only after the iPhone threatened to eat the entire mobile market did the other carriers turn to Android to stave off irrelevancy.

Well, it's happening again, only this time Google has found desperate partners in T-Mobile and Sprint. Where the iPhone was Apple taking control of the user interface and hardware design, Google Fi takes control of the billing relationship and decides which wireless connection to route data over. It pushes tech-company control deeper into the device, down to the layer of the firmware and the SIM card itself, essentially pushing the mobile carrier off its toehold in the device market. Going forward, the mobile carriers will get revenue when Google Fi decides to route traffic to them, and not otherwise.

With Google Fi, Google controls the wireless service the user is getting data from, and much becomes possible in the near future. Google Fi will be able to easily integrate future networks, like Google Loon, the SpaceX-Google satellite initiative, or even whitespace or mesh networks. Google will also be able to intelligently route data over multiple networks using Multipath TCP, so that Google Fi will (from the end-user's perspective) be faster than any one of the networks it is made of. Software innovation the carriers never bothered to implement will allow user-controlled settings to prioritize price, bandwidth, or latency, even routing different services over different networks so that data backup uses whatever is cheapest while gaming uses whatever has the lowest ping.
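To show what I mean by user-controlled settings, here's a hypothetical sketch of a per-service routing policy. The networks, prices, and latencies are invented, and Google hasn't published any API like this; it's just the kind of logic a firmware layer could run.

    # Hypothetical per-service routing policy for a firmware layer like Fi's.
    networks = [
        {"name": "T-Mobile LTE", "price_per_gb": 10.0, "latency_ms": 60, "mbps": 30},
        {"name": "Sprint LTE",   "price_per_gb": 8.0,  "latency_ms": 90, "mbps": 20},
        {"name": "Home Wifi",    "price_per_gb": 0.5,  "latency_ms": 25, "mbps": 100},
    ]

    policies = {
        "backup": lambda n: n["price_per_gb"],   # cheapest connection wins
        "gaming": lambda n: n["latency_ms"],     # lowest ping wins
        "video":  lambda n: -n["mbps"],          # most bandwidth wins
    }

    def pick_network(service):
        return min(networks, key=policies[service])["name"]

    for service in policies:
        print(f"{service}: route over {pick_network(service)}")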

And that software/firmware-enabled flexibility is the key. Carriers today thrive on locking up users for years at a time. On Google Fi the carriers will have to compete every second of every day against the auction process embedded in Google Fi's firmware layer. And not just compete against each other, but also against other forms of transit such as Wifi, Facebook's drones, and the balloon and satellite technology mentioned above. Every ounce of margin is going to be sucked out of their business, eventually resulting in lower prices for consumers. And of course Google's partners Sprint and T-Mobile have to know this, but at this point they've got nothing to lose.

Eventually the carriers as they exist today will die away as consumer-facing brands. They might hold on to the enterprise service business, or they might not. Hard to say. The networks of towers they've built, though, will slowly become the dumb pipes that they are, subject to unrelenting moment-to-moment competition. Depending on how Google Loon, SpaceX, or Facebook's drones interact with Bitcoin-powered mesh networks, I'm not even sure they'll survive as firms. The "bright side" scenario for them is that mesh networks don't succeed and they become business-facing network managers who earn only whatever profits their spectrum ownership allows for. The "not as good" scenario is that mesh networks do succeed, and they transition to an existence similar to Linksys or ASUS, just selling rebadged OEM hardware to end-users.

Expect Apple and Microsoft to eventually make similar services for iOS and Windows. Apple loves controlling the consumer experience, and this will only make it easier. Microsoft will eventually follow Google and Apple to maintain feature parity, as it usually does in mobile. At that point the mobile-OS companies will have taken full control of their customers, the carriers will be declawed, and consumers should win in the sense that they'll get more bandwidth for less money.

UPDATE: Byrne Hobart sent me the link to this Joel on Software Strategy Letter V. A key quote from the letter is:

Smart companies try to commoditize their products' complements.

Indeed. Google's key product is Search (Advertising), and its complement is bandwidth. The profitable mobile carriers (Verizon and AT&T) are resisting being commoditized, but Sprint and T-Mobile are willing to risk it. Google Fi, by making bandwidth "just work" for the consumer, commoditizes the connection. Google Loon and the other initiatives mentioned above will continue that trend.

Wednesday, April 22, 2015

ULA's Integrated Vehicle Fluids

Frank Zegler of ULA provides some background here, with a follow-up here, on the development of their Integrated Vehicle Fluids system (which is built around a small internal combustion engine, or ICE) for the Centaur upper stage. A question and answer at Stack Exchange explains the practical benefit.

This is really great stuff, and may be what keeps ULA "in the space business game" once the expendable Atlas rockets are retired. No one else has technology like this that I'm aware of, and ULA's in-space "rocket trucks" could become the deuce-and-a-half of the coming space age. The increased mission duration and number of burns are particularly relevant for anything beyond LEO, such as missions to the Moon or an asteroid.

The shape of the business in the 2020-2030 range seems to be: SpaceX launch, ULA cargo movers throughout Cis-Lunar space, and Bigelow-leased campers and destinations (passenger cars, space stations, and lunar/asteroid habitats). Even companies developing their own in-space technology (such as Planetary Resources) might decide to not duplicate R&D efforts and rely on ULA for moving things around.

Also, take a moment to appreciate the irony of the fact that Elon Musk (by making ULA's Atlas rockets uncompetitive in the launch market) has forced them to create a new market for internal combustion engines.

Into the Black

A post from Centauri Dreams explores what a nomadic colony in the Oort Cloud might look like.

The tl;dr of his idea: small bands (~25 individuals) tending really big mirrors that collect starlight for energy, traveling in groups totaling ~500 individuals spread out over an area the size of the continental United States. (It has to be spread out that far because starlight in the void between solar systems is so diffuse that the energy collectors have to be enormous.)
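To see why the collectors have to be so big, here's a rough inverse-square calculation. The colony's power budget and its distance from the Sun are assumptions I picked for illustration.

    # Rough inverse-square arithmetic for collector size in the Oort Cloud.
    solar_constant = 1361.0          # W/m^2 of sunlight at 1 AU
    distance_au = 10_000             # a nominal Oort Cloud distance (assumed)
    flux = solar_constant / distance_au ** 2   # sunlight falls off as 1/r^2

    power_needed_watts = 100_000     # assume a modest 100 kW for a small band
    area_m2 = power_needed_watts / flux
    print(f"Sunlight at {distance_au:,} AU: {flux:.2e} W/m^2")
    print(f"Perfect-collector area for 100 kW: {area_m2 / 1e6:,.0f} square km")

That works out to thousands of square kilometers of collector per small band, and real collectors are far from perfect.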

Personally, I think this sounds like a lousy idea.

Socially, small bands of 25 people may get along, or they might all kill each other. Humans need a bit more space than that. Further, such small groups would require that everyone be an extreme generalist so that any one person's death doesn't result in the death of the colony (you never want to hear "Only Bob knew how to fix the air regenerator!" 10,000 AU from Earth's atmosphere). Modern human society is only possible with specialization, so I'm not even sure this level of generalization is feasible.

(Of course these future space colonists would have access to very sophisticated and intelligent software that could probably talk them through repairs, but that presents a deeper risk if no one on the colony really understands personally how the systems work or interact)

Also, to protect the colony from interstellar radiation, you're going to need a very thick water-ice jacket. There's plenty of water in the Oort Cloud, but surface area doesn't scale linearly with internal volume: as a pressurized volume grows, its surface area (and thus its shield mass) grows more slowly than the volume it encloses. If you're trying to accelerate a volume big enough for 500 people out of the solar system, a single volume with a single water-ice shell will have much lower shield mass than many smaller volumes each carrying their own shell.
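A toy version of that comparison, assuming spherical hulls, a fixed shield thickness, and a made-up volume allowance per person:

    # Toy comparison: total water-ice shield mass for one big hull vs. many
    # small pods holding the same total interior volume (thin-shell estimate).
    import math

    people = 500
    volume_per_person = 500.0     # m^3 of pressurized volume each (assumed)
    shield_thickness = 2.0        # m of water ice (assumed)
    ice_density = 917.0           # kg/m^3

    def shield_mass(num_hulls):
        interior = people * volume_per_person / num_hulls       # m^3 per hull
        radius = (3 * interior / (4 * math.pi)) ** (1 / 3)
        area = 4 * math.pi * radius ** 2
        return num_hulls * area * shield_thickness * ice_density

    one_big, twenty_pods = shield_mass(1), shield_mass(20)
    print(f"1 hull:  {one_big / 1e6:,.1f} kilotonnes of ice")
    print(f"20 pods: {twenty_pods / 1e6:,.1f} kilotonnes of ice "
          f"({twenty_pods / one_big:.1f}x more)")

Splitting the same living volume into twenty pods costs roughly 20^(1/3), or about 2.7 times, the shield mass, before you even count the duplicated structure and machinery.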

From a colony safety point of view of course there's a risk to having everyone in a single volume, but the tradeoff is you have more people and resources in one place to respond to emergencies too. And bigger systems have bigger buffers in almost every parameter, ceteris paribus.

Of course, what's the point of being out in the Oort Cloud in the first place? It's so far removed from planetary masses and solar energy that no civilization will ever be as comfortable there as it would be inside Jupiter's orbit. The main asteroid belt makes sense as a destination for permanent settlement; the Oort Cloud, less so.

As I see it, there are only two reasons to be in the Oort Cloud: to collect resources for use by the solar civilization, or as a stepping stone towards the next solar system. And neither of these cases requires long term, fully sustainable colonies. The first requires the deep space equivalent of an offshore oil and gas platform, and the latter the deep space version of a cruise ship. Both uses have predictable mission lengths, and thus huge solar collectors are unnecessary. Just bring a nuclear battery.

What might a colony ship look like? Well, if resources are scarce or expensive in the solar system, a group of colonists could put together a mission that only has enough fuel and material to reach the Oort Cloud, and then scavenge an Oort Cloud object to provide the rest of what they need. A single ship, surrounded in a thick water-ice shell, with a nuclear core keeping everything warm. Rotate the thing to add gravity. A few hundred colonists could then use the deuterium from the comet to accelerate their little seed pod towards the next star, where they could then settle or resupply and keep going. The nuclear battery would run out of fissionable elements eventually of course, but you just plan for that and make sure you have twice as much as you think you need to make it to Alpha Centauri.

Sound risky? Sure, but then so were the wagon trains to California. Or the rafts used to colonize Polynesia. People have taken greater risks in the past to find a new home, and they will again in the future. And at least these colonists can pack tens of thousands of DNA samples with them to avoid inbreeding en route or at their destination.