Monday, March 30, 2015

DARPA to the rescue

The NASA budget for the last seven years has been around $18 billion, give or take a couple hundred million. It's not huge as a percentage of government spending, but it's not a small amount of money either.

Despite that, NASA has not been able to build a replacement for the Space Shuttle, leaving the USA at the mercy of Russia for reaching the International Space Station. This is largely a story of Congressional interference, which has forced NASA to waste money building the horrendously expensive SLS rocket and Orion spacecraft despite there being no mission for them, while real missions and good science (and funding for simply buying launch capacity and crew services from American commercial firms) go unfunded or get cut.

The basic reason this happens is that Congress cares about NASA's inputs (money spent in their districts, jobs created in their districts, etc.) and doesn't care nearly as much about the results produced (tons of cargo delivered to orbit). A results-oriented approach can produce much more than NASA (which currently produces zero, in terms of launch capacity) for much less money.

Enter DARPA.
Prabhakar said it is essential for the Defense Department to reduce launch costs and increase launch frequency. “Where we are today is that it takes years to schedule a launch and billions of dollars to put anything of substance on orbit,” she said. “Huge shifts are going to be needed there, I believe.”
Notice the key terms: launch cost and launch frequency. The SLS rocket that NASA is spending all its resources on is neither cheap nor expected to fly often. And the United Launch Alliance, the private-sector joint venture between Boeing and Lockheed Martin which has an effective monopoly on providing the military with space access thanks to its political pull in Washington D.C., is not a whole lot better. Why lower costs when there's no competition?

DARPA is part of the DoD, and ultimately the military does care about results. They want their satellites in place on schedule so they can carry out their military missions. It seems that their patience with the Congress-managed space programs at NASA and ULA is starting to wear thin.

Pictured Below: A DARPA rendering of a potential small-satellite launch system using a military jet as a first stage.

Saturday, March 28, 2015

ISS 2? I think not

Apparently some Russian bureaucrat is talking up the possibility of NASA and RFSA collaborating on a successor to the International Space Station, to fly at some unspecified future date. What NBC News is too polite, or too invested in the appearance of neutrality, to say is that this is utter poppycock.

The quote from NASA Administrator Charles Bolden, that "there are some areas that are better suited to commercial companies", is a reference to Bigelow Aerospace. Bigelow is building space stations, and their prototype was launched way back in 2006. A successor test unit followed in 2007, and it was also successful. This year (2015) Bigelow will launch the Bigelow Expandable Activity Module to dock with the ISS and add more interior room to the space station for the first time in years.

Bigelow has bigger plans than selling add-ons to the ISS, too. Their BA 330 space habitat has (wait for it...) 330 cubic meters of internal volume. (You'll never guess the internal volume of their proposed BA 2100 module.) For comparison, the International Space Station has 916 cubic meters of internal volume, but where the ISS took ten years and cost $150 billion or more to build (roughly $20 million per person per day of crew time), a BA 330 will go to orbit in a single rocket launch and Bigelow will lease space on it for prices starting at $25 million for two people for two months (roughly a hundredfold cost improvement). (And I should note that Bigelow's lease rates are not its costs. Bigelow has spent more than a decade developing this technology, and it needs to recoup that investment. Prices will only fall towards costs once competition appears. As long as Bigelow is the sole provider, expect them to make bank.)
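As a rough sanity check on that cost comparison, here's a minimal sketch in Python. It uses only the round numbers quoted above and assumes a "month" of about 30 days, so treat the multiple as order-of-magnitude rather than an audited figure:

    # Back-of-the-envelope: ISS crew time vs. a Bigelow lease, using the
    # round figures quoted in this post (not audited cost data).
    iss_per_person_day = 20e6                  # "$20 million per person per day"
    lease_price = 25e6                         # $25M for two people, two months
    lease_person_days = 2 * 2 * 30             # 2 people x ~60 days each
    bigelow_per_person_day = lease_price / lease_person_days
    print(round(bigelow_per_person_day))       # ~$208,000 per person per day
    print(round(iss_per_person_day / bigelow_per_person_day))  # ~96x cheaper

So roughly two orders of magnitude; the exact multiple depends on which ISS cost estimate you use and how you count crew time.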
Sidebar: Why hasn't Bigelow launched a BA 330 yet if they had working prototypes 10 years ago? Simply put, they mis-timed the market. Bigelow builds the stations, but not the rockets that get people to and from orbit. Bigelow thought that companies like Scaled Composites (that won the Ansari X Prize in 2004) would have reached orbit by now. Obviously that has not occurred, and if Bigelow wasn't shielded by the immense wealth of its billionaire founder, they'd probably be bankrupt by now. But luckily for them it seems that SpaceX's Dragon will be fit for crew soon, and Bigelow will finally have a mode of transport to orbit for its customers.
Anyway, yeah, NASA won't be doing another ISS with Russian cooperation, not when an American company can lease them space station access for a tiny fraction of the cost. Both patriotism and financial prudence say that Bigelow is the near-term future for Americans spending time in orbit. Whenever NASA decides the ISS cannot be maintained any further, it will be de-orbited and there will be no government-built successor.

Pictured Below: A rendering of several BA 330 modules connected by hubs. As pictured, this station would have more than twice the volume of the current ISS. There's no way to know if this particular configuration will be built, but if it were, it would take about 4-5 months' worth of SpaceX's launch manifest to put into orbit and cost 1% of what the ISS did.


Is this the end for internal combustion engines?

Sakti3 is building a solid-state lithium battery (sans liquid electrolyte) they say will have an energy density of 1,100 watt-hours per liter and will cost $100 per kWh or less.

This is no ordinary claim. If they can deliver, it's an inflection point.

The energy density is very good. It's more than double that of today's lithium-ion batteries, so your mobile devices will get even smaller. (Of course I'd like them to stay the same size and last for days, but for some reason phone manufacturers never seem to provide that option.)

But the price is the big deal. Tesla is building its Gigafactory in the hopes of using economies of scale to drive lithium-ion costs to $200/kWh or a little less. Sakti3's battery would be half that number. Moreover, there's the comparison with internal combustion engines, which Brad Plumer wrote about at the Washington Post.

Check out this infographic, which is so incredibly helpful at showing the dynamic relationship between battery costs and gasoline costs:

In gasoline cars, both the engine and the fuel are expensive, but the cost of the gas tank is an afterthought. In electric cars, the electricity and the motor are relatively cheap (your kitchen appliances use the same kind of motor, just smaller) but the batteries are expensive. So the cost of the battery determines how competitive the electric car is with internal combustion engines.

But look at that chart. It doesn't even go down to $100/kWh. Back in 2012 when the article was written, a battery cost that low was too absurd to contemplate. But you can follow the angle of the line in your imagination, and you can see that gas would have to be under $1.50/gallon to compete with Sakti3's battery technology. If Sakti3 fails, Elon Musk's Gigafactory should make electric cars just about even with internal combustion engines. But if Sakti3 succeeds, electric cars will be decisively cheaper.
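Here's that relationship as a toy model. Every parameter below is my own assumption, not a number from Plumer's chart, so it illustrates the direction of the effect rather than reproducing the chart's exact breakeven points:

    # Toy model: gas price ($/gallon) at which a gasoline car and an EV cost
    # the same per mile, counting only battery amortization and energy.
    # Every parameter here is an assumption for illustration.
    def breakeven_gas_price(battery_cost_per_kwh,
                            pack_kwh=60.0,             # assumed battery pack size
                            lifetime_miles=150_000,    # assumed battery/vehicle life
                            ev_miles_per_kwh=3.5,      # assumed EV efficiency
                            electricity_per_kwh=0.12,  # assumed utility rate
                            gas_car_mpg=30.0):         # assumed gasoline-car mileage
        battery_per_mile = battery_cost_per_kwh * pack_kwh / lifetime_miles
        energy_per_mile = electricity_per_kwh / ev_miles_per_kwh
        return (battery_per_mile + energy_per_mile) * gas_car_mpg

    print(breakeven_gas_price(200))   # ~$3.4/gal at Gigafactory-class Li-ion prices
    print(breakeven_gas_price(100))   # ~$2.2/gal at Sakti3's claimed price

The absolute numbers move around a lot with the assumptions (and with how much credit you give the EV for its cheaper drivetrain), but the shape is the point: every drop in $/kWh pushes the breakeven gas price lower.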

Sakti3 claims to be within a year or two of commercialization, and they're apparently close enough that Sir James Dyson has invested $15 million in them so that he can power his growing tool and appliance empire. Car companies like GM have put money in too. Fingers crossed.

Biology & Moore's Law

What does it mean that biology has become an information science?

With the success of the Human Genome Project, biology moved from an empirical science of observation (as Gregor Mendel and Charles Darwin practiced it) to an information science. We began to have the computable information necessary to analyze and study biology. The nucleobases of DNA and RNA are something a computer can easily understand, record to memory, and work with.

Moreover, the technology surrounding biology has started to fall in price while increasing in power, just like any other computer chip. The price of reading a human genome has fallen from billions of dollars to less than $1,000 in under 20 years. Synthesizing DNA is more expensive than just reading it, but it is falling in price at similar rates. Modifying DNA in vivo is becoming possible, and will get cheaper.
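To put that trend next to Moore's Law, here's a quick halving-time estimate, assuming roughly $3 billion around 2001 and roughly $1,000 around 2015 (round numbers, so treat the result as order-of-magnitude):

    # How fast has the cost of reading a human genome been halving?
    # Inputs are round numbers (~$3B circa 2001, ~$1,000 circa 2015).
    import math

    halvings = math.log2(3e9 / 1e3)       # ~21.5 halvings of the price
    months = 14 * 12                      # ~14 years between the two price points
    print(round(months / halvings, 1))    # ~7.8 months per halving,
                                          # versus ~18-24 months for Moore's Law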

But what else does it mean?

Read this article about improving the photosynthesis of crop plants. Soon we could have Rice 2.0. Then Rice 2.1. Then maybe we get a couple of revolutionary improvements between growing seasons and farmers skip right to Rice 4.0 the next year.

Biology becoming an information science, where we can write data as well as read it, isn't just about predicting. It's about improving. It's about actual biological things (plants, animals, microbes, algae) getting upgraded out in the real world at the speed of research.

In other news, Malthus is still not terribly relevant to human affairs.

Google is losing the self-driving car race

Google has been working on a self-driving car for a while, but it's now obvious they will not be first to market, and I expect that within a year (or two, at most) they will abandon most of the technology they have developed and either not enter this business at all, start over, or license technology from another player. This is not the prediction I would have made at this time last year, but it appears that Google did not learn the lessons of recent history.

This video is about 27 minutes long, but is fascinating throughout. It details the recent history, present, and near future of the Mobileye EyeQ computer chip and computer vision system. Of course this is just one chip, presented by the company that sells it, but performance has been and is being demonstrated in real world scenarios. We can be fairly confident in the honesty of the claims.


Self-driving cars made the leap from science fiction to modern fact after Stanford University's "Stanley" won the 2005 DARPA Grand Challenge, a prize competition designed to encourage research into autonomous vehicles.

Although the Grand Challenge was nearly ten years ago now, what stuck with me is that there were two approaches taken by the teams that competed in it. One approach was what I called "plan and prepare". They studied the course for the challenge ahead of time, planned out a route, and programmed their cars to follow that route. The other approach I called "see and react". They didn't approach the course with a plan, but rather with a GPS guidance system and a bunch of rules like "drive towards your destination, but not into trees".

As it happened, the Stanley car that won the race used the "see and react" system, but the "plan and prepare" cars did manage to finish. However it seemed obvious to me from that moment that only "see and react" systems, or "observant, rule-based systems", could scale and adapt to all of the driving situations the world had to offer. Even the best map in the world would fail the "plan and prepare" car at the first accident, fallen tree, or change in traffic patterns. And that's ignoring the issues of how you plan and prepare for private roads, or informal roads in Ecuador or Indonesia, where your mapping cars never drive.
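To make the distinction concrete, here's a minimal caricature of a "see and react" controller in Python. None of this is Stanley's actual code; it just shows the shape of the approach: sense the world, apply a handful of rules, act, and repeat, with no pre-built route anywhere.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        heading_to_goal: float   # degrees relative to the car's nose (from GPS)
        obstacle_ahead: bool     # from the vision/LIDAR module
        clear_left: bool
        clear_right: bool

    def decide(obs: Observation) -> str:
        """'Drive towards your destination, but not into trees.'"""
        if obs.obstacle_ahead:
            if obs.clear_left:
                return "steer_left"
            if obs.clear_right:
                return "steer_right"
            return "brake"
        if obs.heading_to_goal > 5:      # goal is off to the right
            return "steer_right"
        if obs.heading_to_goal < -5:     # goal is off to the left
            return "steer_left"
        return "go_straight"

    # A fallen tree is just another obstacle; no map needs updating first.
    print(decide(Observation(12.0, obstacle_ahead=True,
                             clear_left=False, clear_right=True)))  # steer_right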
Sidebar: I don't mean to criticize the teams that developed their cars using the plan and prepare method. It was still excellent engineering. And after all, they weren't designing their cars to drive around the world; they were only focused on winning the Grand Challenge. Their methods worked, and almost won. And they were facing the real constraint of the state of the art in computer chips at that time. Visual processing is hard, and running a real-time computer vision system within the power budget of a standard passenger car was impossible. Stanley, the winning "see and react" car, had a server rack in its trunk and paid a heavy weight penalty for the batteries needed to run it. It made total sense, from the perspective of winning the Challenge, to do the heavy computing ahead of time on servers that weren't on the car and keep the car itself as dumb (and lean!) as possible. But in the long term it should have been obvious that this power and processing constraint was a temporary one, as Moore's Law eventually makes such constraints trivial for a fixed amount of processing work.
Now, since the Stanford team that won the Grand Challenge used the "see and react" method, and that team was headed by Sebastian Thrun, and Sebastian Thrun went to work for Google as part of their self-driving car effort, I assumed that Google would continue using this method. But that was my mistake, as it appears they have not. 

As this article describes (it gets other details wrong, but is correct so far as Google's efforts are concerned), Google has built an exceptionally detailed version of Google Maps for the areas in Northern California near their headquarters, and their self-driving cars use these maps for driving. Essentially, the car uses very expensive LIDAR sensors to scan the 3D space it is driving through, compares that scan to the map held in Google's massive cloud computers, and then cautiously makes its way through the world based on that. And to be clear, that map is accurate to a fraction of an inch.

What we have here is the world's biggest "plan and prepare" system. Google is sensing the roads ahead of time, using its server farms to analyze that data, and then telling the car what to do. The car itself is fairly stupid, its sensors are expensive, it reacts poorly to changes in road conditions, and it cannot drive any further than Google's Map. It cannot drive to New York, or probably even much further than Las Vegas. Certainly not to Guadalajara. 
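For contrast, the "plan and prepare" pattern looks roughly like the sketch below. This is an illustration of the architecture just described, not Google's actual software: the "map" is a set of pre-surveyed landmark points, the route was computed offline, and the car gives up the moment the live sensor data stops matching the survey.

    # Caricature of a map-dependent ("plan and prepare") driving step.
    PRIOR_MAP = {(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)}   # pre-surveyed landmarks
    PLANNED_ROUTE = [(0, 0), (0, 2), (2, 2)]                # computed ahead of time

    def step(live_scan, route_index):
        # "Localize" by checking how well the live scan matches the old survey.
        overlap = len(live_scan & PRIOR_MAP) / len(live_scan)
        if overlap < 0.8:
            # Snow, a construction zone, or simply leaving the mapped area all
            # look the same here: the world no longer matches the map.
            return "stop_and_ask_for_help"
        return "drive_toward %s" % (PLANNED_ROUTE[route_index],)

    print(step({(0, 0), (0, 1), (0, 2)}, 0))   # on the map: follows the plan
    print(step({(5, 5), (6, 5), (0, 2)}, 1))   # world changed: gives up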

The EyeQ chip, on the other hand, uses regular camera sensors (like you find in your cell phone) and extracts all the information it needs from that visual data, then in real time determines where it should be driving and what it should avoid. This system is working today, and Tesla will deploy it to regular customers this summer. 

So to sum up, Google's system cannot scale geographically beyond its Map, it cannot adapt well to changes in road conditions (like bad lighting or snow that covers lane markers), and it uses really expensive LIDAR sensors. Meanwhile the Mobileye system (which Tesla is deploying already) can drive on any road, reacts well to changes, and relies on cheap (and ever-cheaper) camera sensors. And Mobileye's system will only get cheaper and better as its chips ride Moore's Law and its camera sensors are driven by the economies of scale found in the smartphone market.

Which leads me back to my original conclusion: As surprising as it seems, Google is not the leader in self-driving car technology, and it's hard to see how they become so without a massive effort starting almost entirely from scratch.

Wednesday, March 25, 2015

CRISPR is coming

The CRISPR revolution seems to be here. Is this the coming of eugenics?

From Marginal Revolution, Tyler Cowen remarks on CRISPR, the breakthrough technology which allows us to modify DNA in vivo:
I believe the implications of all this — and its nearness to actual realization — have not yet hit either economics or the world of ideas more generally. This is probably big, big news.
I concur. More deeply, we could be moving to a new form of evolution, where genes can be inherited laterally across lineages, just like ideas can. Rather than a new mutation (or old mutation, newly discovered) having to displace old populations, the mutation can be voluntarily adopted by existing populations, much like converting to a newly popular religion.

In many ways, this is to be welcomed. There are many genetic sequences in the animal kingdom which would amount to super-powers among humans, such as the naked mole rat's remarkable resistance to cancer or the opossum's resistance to snake venom. Even if we don't want to adopt these mutations into our own genome, we could edit common pond algae to cheaply produce the relevant proteins in abundance for when we want to use them medicinally.

We will also be entering a dangerous phase of our species' existence where independent labs can modify animals or plants and release them into the wild. Think about the introduction of rabbits to Australia, or other invasive species in other parts of the world. Soon the invasive species will be man-made. There will be instances where this is beneficial, but we should also assume there will be instances where it will not.

Much to consider.

Tuesday, March 24, 2015

The costs of reaching orbit

This is well-trodden ground among space buffs, but it's an important topic that must be understood and internalized before you can understand what is (and is not) important in space technology news.

Most people think reaching space is really, really expensive. And it is, currently, because of how the U.S. and Soviet governments designed their space programs in the post-WWII Era, but it doesn't have to be. Reaching space can be pretty cheap under certain circumstances.

Like any business, space launch and in-orbit activities have costs that recur at certain intervals. One of those recurring costs is fuel, but honestly, rocket fuel isn't very expensive on a per-unit basis. For a hydrogen-fueled rocket it's just water that's been broken down into hydrogen and oxygen and then chilled to liquid temperatures; kerosene-fueled rockets like the Falcon 9 burn a refined petroleum product that's similarly cheap. This is basic chemistry, and the materials are available in industrial quantities at low cost.

Another recurring cost in space flight (and this is where the stupidity of government programs becomes obvious) is the rocket itself. The workhorse Titan, Saturn, Delta, and Atlas rockets that launched our Gemini and Apollo missions, planetary exploration craft, and military satellites are all thrown away after one use. Most are descended from the military rockets first developed by Nazi Germany and then further developed by the US and the USSR as ICBM weapons. For military rockets, single-use makes sense. No one expects to recover hardware from the epicenter of a nuclear blast. But for a vehicle that is flying to and from peaceful locations with non-exploding cargo, expendable rockets are ludicrously expensive and wasteful.

So is reaching space expensive? You bet! But air travel would be expensive too if we scuttled a 747 after every one-way trip.
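To put a rough number on that analogy, using assumed figures (a 747 listing for something like $300 million and carrying around 400 passengers; the real values vary by model and configuration):

    # What one-and-done airliners would do to ticket prices (assumed round numbers).
    aircraft_price = 300e6          # rough list price of a 747-class airliner
    passengers = 400                # rough passenger count
    print(aircraft_price / passengers)   # ~$750,000 per seat, before fuel or crew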

Of course the rocket scientists that developed NASA's space program for the Apollo missions knew this would be wasteful and expensive. They weren't idiots. The problem was that Apollo wasn't a sensible infrastructure program like the Hoover Dam. Apollo was a political statement, and a race to beat the USSR to the Moon. The guys at NASA had to choose between using the better-developed one-use weapons rockets, or developing new rockets that could land, be refueled, and take off again like a 747. The latter would produce cheap and sustainable access to space, but the former could be done cheaper and more quickly within the timeline set down by Kennedy for reaching the Moon, so the former they did.

NASA attempted to correct those mistakes with the Shuttle program. At first they wanted to create a "space plane" that would fly and fly again like a 747. Unfortunately what was developed was far from that. The Space Shuttle orbiter threw away its main fuel tank, dropped its boosters into the ocean (which required their being retrieved and then refurbished at great expense), and had a heat shield that required significant maintenance between flights (and whose damage destroyed the orbiter Columbia; Challenger was lost to a booster failure). As a result of these issues, the Space Shuttle was far from the cheap, reusable craft it was first hoped to be. Over the course of its existence, the Space Shuttle program is estimated to have cost between $1 and $2 billion per launch.
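That per-launch figure is basically the total program cost spread over the missions actually flown. Using commonly cited round numbers (on the order of $200 billion over 135 flights; estimates vary with how you count and adjust for inflation):

    # Shuttle program cost per launch, with commonly cited round numbers.
    total_program_cost = 200e9    # ~$200B over the program's life (estimates vary)
    flights = 135                 # Shuttle missions flown
    print(total_program_cost / flights)   # ~$1.5 billion per launch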

Besides the cost of fuel and the cost of throwing away the rocket, there are two other sources of cost that have historically driven the price of space flight even higher.

The first is the cost of ground operations. To use air travel as an analogy again, this is the cost of running the airport, having personnel on hand to fuel, repair, maintain, and prep vehicles for use, and the cost of air-traffic control.

Ground operation costs are largely fixed, and thus their cost per flight is driven by flight rate. At a busy airport like O'Hare, the cost of the ground crews and the air traffic control tower is shared among the hundreds of flights that go in and out daily. At Kennedy Space Center, where the old Shuttle program flew only a handful of times per year, the cost of keeping personnel on hand between flights was quite burdensome. Everything else being equal, a rocket program with a high flight rate will be cheaper per unit of mass delivered to orbit than a program that flies infrequently.
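The flight-rate effect is nothing more than dividing a fixed standing-army cost by the number of flights it supports. A toy example with a made-up $500M/year ground-operations budget:

    # Fixed ground-operations cost amortized over different flight rates.
    annual_ground_ops = 500e6     # made-up annual fixed cost, for illustration
    for flights_per_year in (2, 12, 100):
        per_flight = annual_ground_ops / flights_per_year
        print(flights_per_year, "flights/year ->", round(per_flight / 1e6), "M$/flight")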

The last source of costs is the cost of designing and building the vehicles themselves. Unfortunately for most of its history, NASA has worked on a cost-plus model, where NASA says what they want and then pays whatever it costs to build that, plus a fixed "profit" margin to the contractor. This system produces zero incentive to lower costs, and in fact provides the opposite incentive. The further a contractor can drive up costs, the bigger their "plus" profit.

So, to recap, the four costs of space flight are:

  • Fuel
  • Single-use flight profile
  • R&D/Manufacturing (the rocket itself)
  • Ground operations
The cost of fuel is largely the same for everyone. It's an industrial product produced by a competitive market. It's also less than 0.2% of the current cost of reaching space.
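For a sense of scale: SpaceX has put the propellant bill for a Falcon 9 flight at roughly $200,000 (I'm recalling that figure from their public statements, so treat it as approximate), and for lack of per-rocket data the sketch below reuses it as a stand-in for the other vehicles too. Either way it's a rounding error:

    # Propellant as a fraction of launch price (illustrative; the propellant
    # figure is an approximate recollection of SpaceX's public statements,
    # reused here as a stand-in for all three rockets).
    propellant = 200e3
    for name, launch_price in [("Falcon 9", 61e6), ("Atlas V", 226e6),
                               ("Delta IV Heavy", 435e6)]:
        print(name, round(100 * propellant / launch_price, 2), "% of launch price")

A third of a percent at the very most, and far less for the rockets that still account for most of what the government pays to reach orbit.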

But what about the other three? Well we can sort of guess how much they contribute to the cost of space flight by examining certain statements from SpaceX, an American space company that has refused to participate in the cost-plus model and is working on reusability. 

The Delta IV and Atlas V are current US-based competitors to the SpaceX Falcon 9. They were built using the cost-plus model of development. I made the following little table to compare costs; I have also included the Falcon Heavy (set to fly this year) and the Space Shuttle (for historical comparison). 

Rocket          kg to LEO    $/Launch    $/kg to LEO
Falcon 9           13,150    $   61 M    $  4,639
Delta IV Heavy     28,790    $  435 M    $ 15,109
Atlas V            18,810    $  226 M    $ 12,015
Falcon Heavy       53,000    $   85 M    $  1,604
Space Shuttle      24,400    $1,000 M    $ 40,984
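
The last column is just the launch price divided by the payload mass; here's a quick sketch that reproduces it from the figures in the table above:

    # $/kg to LEO = launch price / payload mass, using the table's figures.
    rockets = {
        "Falcon 9":       (13_150, 61e6),
        "Delta IV Heavy": (28_790, 435e6),
        "Atlas V":        (18_810, 226e6),
        "Falcon Heavy":   (53_000, 85e6),
        "Space Shuttle":  (24_400, 1_000e6),
    }
    for name, (kg_to_leo, launch_price) in rockets.items():
        print(f"{name:<15} ${launch_price / kg_to_leo:,.0f}/kg")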

As you can see, abandoning the cost-plus model and vertically integrating the manufacture of rocket components has produced significant cost savings for SpaceX. And this is still while throwing the rocket away. No other American company comes close to SpaceX's cost, and the one that comes closest (the Atlas V) actually buys its rocket engines from the Russians.

So the cost of reaching orbit on a Falcon Heavy is about $1,600/kg even while throwing the rocket away, but the rocket itself is still really expensive. If SpaceX could achieve reusability, its cost to reach orbit would come down by at least a factor of 10, to roughly $160/kg for the Falcon Heavy. And that's at the current flight rate! If the price falls 10x, we should expect demand to increase, increasing the flight rate and lowering the price further. A virtuous cycle would fall into place, and it's not entirely obvious where a stable price floor would be.

By comparison, a first class airline ticket from New York to Australia costs about $50/kg, only a factor of three away from where the Falcon Heavy could be 3-4 years from now.

So is space expensive to reach? Yes, but only at the moment. With reusable rockets and high flight rates, it really doesn't have to cost any more than jet travel. And thanks to SpaceX's recent efforts, we aren't that far away from seeing it happen. 

Monday, March 23, 2015

The Apple Watch will fail

Five years from now, Google Glass (or a similar HUD, perhaps with tech borrowed from Magic Leap and Leap Motion) will be the primary interface for early tech adopters and Apple Watch will have been shown to be a flash in the pan/hobby business for Apple. Apple Watch may still exist, and perhaps do a brisk business by the personal chronometer industry's standards, but it won't move the needle for Apple any more than Apple TV has to date. In fact, Apple TV has a much brighter future than Apple Watch.

It's all about the user interfaces. Smartphones are a compromise between portability and screen size. It's a compromise that Apple got wrong for years, I might add, while Android ate their market-share lunch. The iPhone 6/6+ are finally about right, but this is the peak of the smartphone era.

Meanwhile the Apple Watch goes the wrong direction on that compromise. It's very portable but will be useless for most UI tasks. Just ask yourself: when you get an SMS, do you want to look down at your wrist and then have to use a scroll wheel to see all 160 characters, or do you want it to hover in your field of view, together with useful meta-information like contact details and your calendar?

A good HUD gives you the same portability as a watch (it's wearable) and more apparent screen size than any smartphone can ever offer. It can even be your entire field of view.

Socially we will adapt. Next gen models will just look like sunglasses with thick ear booms. The cameras and microphones will be pinholes. More of the electronics will be hidden behind the ear, like those big wrap-around hearing aids.


Expect male techies (who haven't already gone shaggy) to grow their hair out a bit to hide the battery packs behind their ears. Women already have that covered and you won't see a change except in hair styles being more often down than up. Contact lenses will become even more popular so that people can take their glasses off when they want to indicate "off the record" intimacy.

----------

Related Posts (Examples of Wrong): How The Apple Watch And iPhone 6 Plus Might Flip Your Mobile Computing Habits