technological innovation Archives - FreedomLab

Value-based healthcare

In an attempt to improve the quality of healthcare services and curb rising costs, a transition towards so-called value-based healthcare has been set in motion. This new paradigm is all about rewarding healthcare providers for their actual contribution to people's health, instead of paying them for whatever they do, irrespective of the outcome. This transition calls for institutional innovation, e.g. a shift of risks from insurers to doctors, as well as technological innovation towards highly interoperable data systems across the sector. Because of these challenges, and more fundamental objections to the paradigm, it is no wonder that the transition is only moving ahead slowly.

 

Our observations

  • The concept of Value Based Healthcare (VBHC) was popularized by Harvard professors Michael Porter and Elizabeth Teisberg in their 2006 book Redefining Health Care. Their main critique of the healthcare sector was that there was no system of outcome measurement to monitor the actual value of treatments and hence that a basis for genuine and meaningful competition between providers was missing.
  • The basic idea is to reward healthcare providers for the health gains they deliver instead of simply paying them for the services they provide. Not only does this, in theory, reduce the incentive to provide ever more care (and send ever bigger bills), it also opens the door to cooperation between providers to jointly deliver the best (and most cost-efficient) care possible.
  • There are roughly two options for organizing VBHC. One is to bundle the multiple services a patient requires for a specific condition (e.g. tests, treatments, follow-up checks) and to reward a (group of) provider(s) according to the outcomes of this bundle (i.e. a so-called episode of care). The other option is “capitation”, which entails healthcare providers taking responsibility for all the health needs of a group of people for a fixed fee per capita. In both options, providers who save costs while maintaining or improving outcomes get to keep (a part of) the savings.
  • Countries across the globe have started to experiment with VBHC and introduced policy incentives to stimulate its implementation. While many stakeholders claim to work on this basis (e.g. commercial insurers in the U.S. claim that over 60% of their claims are part of a VBHC contract), in practice change is much slower, as VBHC is only introduced in mild forms (with providers taking few to no risks) and mostly for specific conditions for which the effect of treatments is highly predictable. One organization claims that true VBHC payment systems only account for some 4-6% of the U.S. healthcare system (and the Netherlands is also slow to adopt).
  • Actual proof of cost savings is scarce. A recent meta-study concluded that, within the American Medicare system, value-based healthcare only resulted in measurable savings for hip and knee replacements (1.6% lower costs from 2013 to 2016). CMS (the Medicare administration) claims annual savings of $739 million due to VBHC-like initiatives. Another, non-academic, study concluded that U.S. healthcare payers who implemented VBHC realized 5.6% savings.
  • The Dutch diabetes clinic Diabeter is a much-applauded example of VBHC; a patient-centered model in which all care is organized around the patient (instead of a patient being sent from provider to provider). Diabeter is among the best-performing clinics and realizes cost savings as patients spend less time in (expensive) hospitals.

Connecting the dots

At its core, the shift to Value Based Healthcare is a shift from processes to outcomes and a shift from medical gains in the narrow sense to health gains in the broadest sense (including quality of life as perceived by the patient). Together, these are expected to lead to higher quality of care and lower costs as a result of better cooperation between healthcare providers (e.g. within or between hospitals) and quality-improving and cost-saving innovation.

While it may sound rather simple, the successful introduction of VBHC requires a paradigm shift in the way the healthcare sector deals with the division of tasks between stakeholders. This is especially true for the relation between the payer, typically the state or an insurance company, and different kinds of healthcare providers. In the traditional services-based model, the healthcare provider is (roughly speaking) free to choose whatever test or treatment is deemed necessary and costs are reimbursed by the payer. This implies that the risks of additional costs, for instance due to complications or the occurrence of multiple conditions at once (i.e. comorbidity), lie solely with the payer. Hence, there is no incentive for doctors to consider the most cost-effective options. In the VBHC model, risks are partially shifted to providers, as they are expected to help the patient for a fixed fee, even when additional treatment is necessary. At the same time, they stand to benefit from relatively “easy” patients and from smarter and more cost-effective ways of helping patients. The latter relates strongly to the fact that, in the new paradigm, healthcare providers are incentivized to work together to jointly improve quality and lower costs. That is, fees are defined at a higher level of aggregation than today (e.g. for a pre-defined episode or for a patient's entire healthcare needs per year) and providers need to distribute payments among themselves; hence, they have a shared interest in improving quality (to get paid at all) and lowering costs. As part of this institutional paradigm shift, technology can play a crucial role in monitoring health outcomes (i.e. both technologically measurable health and the patient's perceived quality of life) and in coordinating efforts among providers. This implies highly interoperable data systems and a willingness to share data among providers and payers.
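The shared-savings mechanics described above can be sketched as a stylized calculation. All figures, names and parameters below are illustrative assumptions, not terms from any real VBHC contract:

```python
# Stylized shared-savings settlement for a bundled-payment (episode-of-care)
# contract. Hypothetical figures for illustration only.

def settle_episode(benchmark, actual_cost, shared_savings_rate=0.5,
                   provider_risk_cap=0.1):
    """Return the provider's net gain (or loss) for one episode of care.

    benchmark: the pre-agreed target price for the whole episode
    actual_cost: what the bundle of services actually cost
    shared_savings_rate: share of any savings the providers keep
    provider_risk_cap: max loss providers bear, as a fraction of benchmark
    """
    delta = benchmark - actual_cost  # positive = savings, negative = overrun
    if delta >= 0:
        return delta * shared_savings_rate
    # Downside risk is capped: the payer still absorbs extreme overruns.
    return max(delta, -provider_risk_cap * benchmark)

# A knee-replacement episode benchmarked at $25,000:
print(settle_episode(25_000, 22_000))   # providers keep half of the $3,000 saved
print(settle_episode(25_000, 31_000))   # loss capped at 10% of the benchmark
```

The risk cap mirrors the "mild forms" of VBHC mentioned in the observations: the smaller the cap, the less risk actually shifts from payer to provider.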

Exactly because this requires a paradigm shift, change is far from easy. Healthcare providers are reluctant (and often financially unable) to take on risks that are currently taken by insurers and governments. This is why VBHC programs are still limited to conditions for which risks are well-known and limited and most (or all) care can be provided by a single organization. More complex conditions, such as heart failure, are far less likely to see VBHC contracts in the near term. More and better data, and hence better risk assessments, may change this in the future, but there is a bit of a chicken-and-egg dilemma, as better data is most likely to result from VBHC programs in the first place (i.e. joint efforts to monitor a patient’s progress across multiple providers and over longer time spans).

There are also several moral objections to the concept of VBHC. One is that it is unclear what “value” actually means (for example, the monistic business concept of value neglects personal values) and who is entitled to define the concept (e.g. a doctor measuring health or the patient experiencing health). Another often-voiced concern is that VBHC tends to undermine solidarity in the healthcare system. Healthcare providers will be reluctant to take on high-risk patients whom they are unlikely to cure within the pre-defined budget (e.g. because of likely side-effects). Although this is not allowed in most countries, instances of “patient dumping” have occurred. Moreover, it is probably easier to force insurers to take on customers (as in most countries today) than it will be to force providers to take on patients (as they could, in some cases, argue that they would not be able to cure them). And last, VBHC presupposes a participatory, informed subject making rational choices. Although research confirms most patients prefer to participate in their care process, it is still unclear to what extent this is desirable.

In response to these objections, mere cost savings from reducing overtreatment may not suffice to convince the general public and healthcare providers (some of whom fiercely oppose VBHC for said reasons). A more convincing argument could be that prevention can play a bigger role in the healthcare system when both payers and providers have a shared financial interest in the general well-being (i.e. not getting ill) of “their” population. This is certainly true in the case of capitation, when providers take on the responsibility for the full medical needs of a group. It will also play a role for lifestyle-related diseases, for which providers then have an additional incentive to make sure patients live healthier lives after they have been treated.

Implications

  • Despite the lack of evidence, VBHC is being promoted globally, e.g. in the Netherlands, the EU and China. Even though this may not result in radical cost savings (immediately), it is likely to lead to a surge in data about treatments and their outcomes (including patients’ own assessments) and this could form the basis of better (and possibly cheaper) treatments in the future.
  • VBHC could enable more investments in e-health solutions by healthcare providers (e.g. hospitals) as they could actually benefit from resulting savings (i.e. lower costs or health improvements) that end up with other providers or insurers in the current system.
  • One problem with VBHC is that it (implicitly) relies on the assumption that overtreatment is a significant problem and a major factor in rising healthcare costs. A recent paper argued that the real problem (in the U.S.) is that many pharmaceuticals are too expensive, administrative costs are running out of control and doctors are paid too much (in comparison to other developed economies). Big Pharma is already out of favor with the general public and doctors may face the same problem.

 

Artificial winter wonderlands

What happened?

Climate change is affecting winter sports regions as they can no longer count on the snow to fall in early December and last until the end of the traditional skiing season. Some areas have lost as much as 40% of their average snow depth over the last decades and at least 60% of slopes worldwide are lined with snow cannons to make up for too warm or too dry winters. In the coming years, these cannons, of which larger resorts need hundreds, will drive up costs significantly as their energy (and water) bill will continue to rise with climate change. At some point, however, these machines will not suffice anymore (they can only produce snow at temperatures close to 0°C) and resorts will either have to switch to even more expensive and energy-consuming methods, or, when this is no longer feasible, limit operations to a couple of months per year.

What does this mean?

In the United States, several lower-lying resorts have already shut down due to disappointing winters and investments are concentrating on high-altitude resorts that are more future-proof. A further shakeout is likely in the coming two decades and, internationally, investors (e.g. in real estate) are also eyeing the highest of regions. Obviously, as snow becomes a scarce good, these regions will benefit from their unique position, but the entire industry will experience a decline; rising costs are already discouraging people from going on winter holidays (e.g. in the Netherlands) and, over time, skiing is bound to become a luxury only the wealthiest households will be able to afford once again.

What’s next?

Apart from rising costs, the environmental impact of winter sports is also growing and skiing could very well be among the next consumer practices that fall prey to the “shame” trend. In the short term, this will mostly relate to the direct impact of skiing resorts in the form of deforestation and exorbitant energy and water usage. In the longer term, people will likely travel farther to reach snow-sure areas (e.g. in Canada or Japan), thus further enlarging the environmental footprint of their vacation. To prevent too heavy a backlash, most ski resorts are trying to reduce the environmental impact of their operations, e.g. by using renewable energy, but their efforts are unlikely to prevent groups of consumers from developing skiing shame in the (near) future.

What to expect from battery development

The rise of electric vehicles and the energy system’s growing need for energy storage solutions have us craving ever-better and cheaper batteries. Currently, our smartphones, vehicles and home batteries make use of li-ion batteries and these have improved immensely over the past decades, but further gains in production costs and performance are more than welcome. There are, however, boundaries to what can be achieved with li-ion technology and a range of alternative chemistries are being developed. The question really is how far li-ion can take us and what we can expect from next-generation batteries.

Our observations

  • According to BloombergNEF, lithium-ion battery costs have fallen by 87%, from $1,100 per kilowatt-hour in 2010 to $156/kWh today. This is mostly due to upscaling and automating production. By 2023, costs may be as low as $100/kWh, but at some point, raw material costs will limit further reductions and growing demand for these materials (e.g. lithium, nickel, cobalt) is likely to result in supply problems and volatile material costs.
  • According to the U.S. Department of Energy, a price of $125 per kWh is needed for electric vehicles to compete with gasoline or diesel vehicles; it estimates this threshold can be reached by 2022. The DOE and others (e.g. McKinsey) are somewhat more skeptical as to when the $100/kWh mark can be achieved.
  • Cost, however, is far from the only criterion. Important characteristics of batteries obviously include energy density (i.e. by weight and by volume), but also power density (i.e. the power they can deliver at any moment), lifetime and safety. Ideally, a battery would combine all these properties using abundant, low-cost and non-toxic raw materials. Unfortunately, there are many trade-offs between these characteristics.
  • Current and future supply problems have made reducing the cobalt content of batteries a key priority. By 2030, EV production is expected to have outgrown current cobalt mining and processing capacity by far, and today automakers are scrambling to secure long-term supply of (responsibly sourced) raw materials. Tesla has pledged to eliminate the mineral in its next-gen battery.
  • More radical changes in chemistries (e.g. lithium metal, solid state, sodium ion, multivalent-based, and lithium sulfur and metal-air) are in various stages of development, but even when they are (technologically) “ready” for commercial use, they will have to go through (more or less) the same process of upscaling and learning.
  • Last month, (fuel cell) electric truck startup Nikola announced it is working on a revolutionary, but undisclosed, type of battery that will hold four times the energy of a li-ion battery, while costing only half as much. In recent history, we have seen a number of battery startups making similar claims, but they have never delivered on their promises. These included Sakti3 (acquired, and later ditched, by Dyson), Envia (linked to General Motors) and Belenos (a subsidiary of watchmaker Swatch). These companies seem to have in common that they made overly bold promises on the basis of lab-scale achievements that proved too difficult to scale up in terms of performance and costs.
  • Beyond transportation, the energy system is in dire need of low-cost, high-capacity energy storage solutions. Along with pumped hydro (which has been used for decades already) and new solutions such as compressed air or hydrogen, batteries will likely play a big role as well. BloombergNEF expects the storage market for batteries to grow from 17GWh today to 2,850GWh in 2040. This is a 122-fold increase and would require an estimated $662 billion in investment.
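The cost figures in the first observation can be sanity-checked with a naive constant-rate extrapolation. This is a simplification: real battery cost projections (such as BloombergNEF's) follow learning curves tied to cumulative production, not the calendar, so the function names and the constant-rate assumption below are ours:

```python
import math

# Back-of-the-envelope check of the cost figures cited above:
# $1,100/kWh in 2010 falling to $156/kWh nine years later, assuming a
# constant annual rate of decline (a deliberate simplification).

def annual_decline(c_start, c_end, years):
    """Constant yearly cost-decline rate implied by two price points."""
    return 1 - (c_end / c_start) ** (1 / years)

def years_to_target(c_now, target, decline):
    """Years until cost falls to `target` at the given annual decline rate."""
    return math.log(target / c_now) / math.log(1 - decline)

rate = annual_decline(1100, 156, 9)   # ~19.5% per year
t = years_to_target(156, 100, rate)   # ~2 years
print(f"implied decline: {rate:.1%} per year")
print(f"$100/kWh reached in ~{t:.1f} years")
```

Even this crude extrapolation lands in the same ballpark as the projections cited above ($100/kWh within a few years), though it says nothing about the raw-material floor that is expected to halt the trend.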
     

Connecting the dots

Any battery consists of two electrodes, an anode and a cathode, that contain electrochemically active materials. As the battery is used (i.e. discharged), particles (i.e. ions) move from one electrode to the other and, in doing so, force an electron to go through an external electric circuit and power a device. In rechargeable batteries, the reverse process takes place when the battery is charged. The two electrodes are separated by an insulating material through which the ions can pass; in most li-ion batteries, this is a separator film drenched in a liquid (the electrolyte). The composition of both electrodes and the separator material determines the characteristics of the battery and some combinations could, in theory, yield a superb battery. In practice, however, there are many challenges to actually making those batteries work for hundreds, or thousands, of charge and discharge cycles without substantial degradation or safety issues. And, when those requirements are met, the battery has to be manufactured at mass scale and at low cost, which, among other factors, rules out all too exotic and expensive materials.

One challenge in finding the optimal combination of materials is that each material adds specific characteristics and there are several trade-offs between them. To illustrate, an important group of li-ion batteries uses so-called NMC cathodes which are made from a mixture of nickel, manganese and cobalt in varying compositions. Roughly speaking, nickel adds capacity, manganese brings safety, and the amount of cobalt determines how fast a battery can charge and discharge. This, however, means that increasing capacity (i.e. more nickel, less of the others) necessarily comes at the expense of safety and charging speed. Other types of li-ion batteries include lithium iron phosphate batteries, which are relatively cheap and long-lasting, but low in energy density, and nickel cobalt aluminum oxide batteries (used by Panasonic/Tesla), which can hold a lot of energy, but are costly and less safe.
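The NMC trade-off can be made concrete with the common commercial variants. The capacity figures below are approximate literature values (in mAh/g) and should be read as indicative only; what matters is the qualitative pattern: more nickel buys capacity at the cost of the stability and rate capability that manganese and cobalt provide:

```python
# Illustrative comparison of common NMC cathode chemistries.
# Capacities are approximate literature values, not datasheet figures.

NMC_VARIANTS = {
    # name: ((Ni, Mn, Co molar fractions), approx. capacity in mAh/g)
    "NMC-111": ((1/3, 1/3, 1/3), 160),
    "NMC-622": ((0.6, 0.2, 0.2), 180),
    "NMC-811": ((0.8, 0.1, 0.1), 200),
}

for name, ((ni, mn, co), capacity) in NMC_VARIANTS.items():
    print(f"{name}: Ni {ni:.0%} / Mn {mn:.0%} / Co {co:.0%} "
          f"-> ~{capacity} mAh/g")
```

The progression from NMC-111 towards NMC-811 is also the industry's main route to the lower cobalt content discussed in the observations above.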

Decades of fundamental research and engineering have led to dramatic improvements in common li-ion designs. To illustrate, the first Nissan Leaf in 2011 was equipped with a li-ion battery with a capacity of 24 kWh. Increasing energy density has enabled Nissan to fit a 40 kWh battery in its similar-sized 2018 model. Yet, for this generation of batteries, the end is in sight as far as storage capacity goes. What’s left is reducing costs through further upscaling and automation of production. This will result in cheaper electric vehicles, but not necessarily in vehicles that can drive much farther on a single charge; any vehicle can only carry a battery of a certain weight and volume.

For genuine breakthroughs we have to look to other types of batteries that use different materials and principles. Some of these promise far greater storage capacities than li-ion (in terms of kWh/kg), but all of them still face considerable hurdles towards practical applicability and readiness for mass production. Among this new generation of batteries, solid-state batteries appear most promising from a mass-market perspective. These use a solid electrolyte instead of flammable liquids, which makes for a safer battery, and because a solid layer takes up less space than a fluid one, it can also increase energy density. For these reasons, solid-state batteries are already in use in pacemakers and other critical applications, but as of yet they are too expensive for EVs. One of the challenges is to find a means of applying an ultra-thin layer of solid electrolyte to the electrodes that stays in place even when the electrodes swell and contract during charging and discharging (which they do as ions shuttle back and forth). Depositing such a layer, atom by atom, is technologically feasible, but extremely time-consuming and thus difficult to scale up.

Solid-state and other next-gen batteries have been in development for decades and progress is slow and fundamentally uncertain. In other words, no “miracle battery” is on the horizon yet and in any case, it will take at least another decade for any of these batteries to come to the market and move beyond specific niche markets where certain trade-offs are acceptable (e.g. with space applications, costs are less of an issue than with automotive uses). In the meantime, (likely) cost reductions will be key to mass scale adoption of electric vehicles and the use of (li-ion) batteries for stationary, grid-scale, energy storage. In terms of vehicle range, progress is most likely to come from further optimization of powertrains and overall weight reduction.

Implications

  • As far as transportation goes, vehicle ranges, on a single charge, are not likely to improve dramatically in the coming decade. For heavier vehicles (e.g. trucks and buses), other strategies may be needed to cover longer distances (e.g. fast-charging en-route, possibly while driving along specific stretches of road with overhead wires or induction chargers in the road surface). Limitations to battery capacity (and longer-term cost reductions) open up a window of opportunity for hydrogen (fuel cell) technology.
  • As we noted before, a lot of renewable technology relies on relatively scarce resources and there will be a scramble for metals such as cobalt and nickel. Ethical sourcing will also be increasingly important in this sector and this will further limit availability of raw (and processed) minerals. Potential beneficiaries include mining companies in politically stable and well-governed regions and battery recycling facilities.

Where the blockchain meets the real world

Blockchain technology promises to automatically process all sorts of digital transactions without mistakes or possibilities for fraud. Not only does this make for reliable processing of payments or contractual agreements, it also ensures highly scalable and potentially cost-efficient processes that require no trusted intermediaries. Yet, these so-called trustless systems still need to be provided with reliable input and they need a means of executing their outcomes. In other words, the interface between the blockchain and the real world is of crucial importance and it is no wonder that many initiatives seek to develop reliable and scalable solutions to this problem.

Our observations

  • Developers sometimes refer to data and processes that take place inside the blockchain as on-chain, and to everything outside that realm as off-chain. We’ve previously noted that this interface is one of the challenges blockchain technology still faces.
  • In the public debate, blockchain technology is still mostly understood as an alternative means of settling financial transactions, such as with Bitcoin. In these cases, end users order a specific transfer of assets and the blockchain processes and stores the transaction in such a way that no one can alter or reverse it.
  • More elaborate applications include so-called smart contracts that execute predefined agreements between stakeholders. Typically, these smart contracts describe specific conditions that, when they are met, set in motion the execution of an agreed transaction (e.g. the unlocking of a bicycle in a bike-sharing scheme or the paying out of an insurance fee in case of extreme weather).
  • In the betting sector, several blockchain-based platforms offer a wide range of betting options that are processed automatically, from determining odds to accepting bets and paying winning bettors. These platforms obviously need reliable input, e.g. about sports results.
  • So-called oracles feed the blockchain with real-life data. This might concern (semi-)public data extracted from some database through standardized APIs, or data searched and verified autonomously by more elaborate AI systems (i.e. software oracles); data can also be obtained from sensors that register real-world events, such as a car entering a parking garage or a product moving through a logistical process (i.e. hardware oracles). Finally, there are also human oracles who can provide the system with data. Initiatives such as Chainlink and Provable offer oracles-as-a-service by retrieving and evaluating data from different sources.
  • In theory, each of these types of oracles can be tampered with. Databases can be hacked and altered, sensors may be fooled and their signals forged and human oracles can have bad intentions themselves or be forced to provide false information. Because of this, developers of oracle systems will typically collect data from multiple sources and might rely on multiple sensors (even from different manufacturers).
  • Initiatives using the crowd to source input for smart contracts, such as Realitio, need to verify the quality of that data and these platforms either use a majority vote (i.e. the most frequently provided answer is accepted as true) or a system in which data providers risk a penalty (i.e. have “skin in the game”) when they provide false answers.

Connecting the dots

We have become used to using intermediaries to handle data, to process financial transactions (i.e. banks) and to verify and enforce contracts (i.e. lawyers and notaries). We have also grown used to trusting these and, to be sure, to devising ways of verifying their trustworthiness. For the most part, these trusted parties have served us well, but they are also costly, limited in their abilities (e.g. in terms of processing speed) and they may, despite all checks and balances, commit fraud.

Blockchain technology, and smart contracts in particular, offer an alternative in the form of trustless systems that can execute agreed-upon terms automatically and irreversibly without the interference of (human) intermediaries. Basically, a smart contract is a piece of code that contains the terms of a contract (typically in some kind of if-then expression) that will be executed once the predefined conditions are met. Given that this is code, and computers practically don’t make mistakes, the contract will always be executed.
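The if-then logic of a smart contract can be illustrated with a toy version of the parametric weather insurance mentioned earlier. This is plain Python standing in for a real contract language (on an actual blockchain this would be written in, say, Solidity), and all names, thresholds and amounts are made up:

```python
# Toy "smart contract": a parametric insurance policy that pays out
# automatically, exactly once, when an oracle reports that a predefined
# condition has been met. Illustrative only.

class ParametricInsurance:
    def __init__(self, insured, payout, rainfall_threshold_mm):
        self.insured = insured                  # hypothetical beneficiary id
        self.payout = payout                    # agreed payout amount
        self.threshold = rainfall_threshold_mm  # the predefined condition
        self.settled = False

    def report_rainfall(self, mm):
        """Called by an oracle; triggers the payout at most once."""
        if not self.settled and mm >= self.threshold:
            self.settled = True
            return (self.insured, self.payout)  # "execute" the transfer
        return None

contract = ParametricInsurance("farm-42", 10_000, rainfall_threshold_mm=120)
print(contract.report_rainfall(80))    # condition not met -> None
print(contract.report_rainfall(150))   # condition met -> ('farm-42', 10000)
```

Note that everything hinges on `report_rainfall` being fed honest data: the contract itself cannot tell a real measurement from a forged one, which is exactly the oracle problem this article discusses.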

Yet, it is exactly this promise of infallible and impossible-to-tamper-with handling of data and transactions, on-chain, that shifts attention to the imperfections of real-world information and transactions. For instance, in food value chains, blockchain could help to empower small farmers and ensure they get a better price for their produce (i.e. by recording how much a farmer was paid for his produce). While such data cannot be altered once it’s on-chain, it is very difficult to establish whether the farmer actually received the recorded amount of (cash) money (off-chain). Moreover, because there are no trusted intermediaries and everything takes place automatically, false input into a blockchain or failure to execute its output in the real world can spell disaster (e.g. automated payments on the basis of fake sports results).

There are basically two approaches to this problem. One might try to extend the reach of the blockchain by further digitizing the processes and bringing them on-chain. On the input end, software- and hardware oracles can, for instance, extract data from existing databases or generate data through sensors (e.g. smart cameras). On the output, or execution, end, all sorts of electronic hardware may be used to effectuate decisions reached on-chain. This may include smart locks, vending machines or even more elaborate robots. Still, bringing more real-life events on-chain through digitization merely shifts the problem. In the example of the farmer, for instance, he could be paid directly through the blockchain in a cryptocurrency, but this does not rule out the possibility of the farmer being forced to supply a disproportionate amount of produce or extorted to pay back (in cash) some of this money. Even further digitization could partially solve this problem, e.g. by digitally earmarking the farmer’s pay so that it can only be used to pay for food or rent, but, again, this would merely lead to an arms race and shift the problem further.

The other approach is to develop scalable ways of acquiring reliable data from human sources and to use humans on the output end as well (i.e. to follow through on decisions made on the blockchain). These may use as many human oracles as feasible and take the majority or average of their “votes” as input for the blockchain. They may also introduce elements of gamification and issue rewards and penalties to those providing good and bad data. Obviously, these approaches work best with rather generic data such as sports results or other data that many (human) oracles can acquire simultaneously. In the case of small farms and the prices of their crops, these approaches would not work, as only the farmer and the middleman know what price was paid. In edge cases like these, when digitization and crowd-sourced data don’t necessarily produce reliable results, and the fulfillment of conditions is difficult to verify, we may still have to rely on old-fashioned trusted intermediaries such as fair-trade organizations, lawyers and notaries. On the one hand this limits the scalability and added-value of blockchain solutions in these cases, yet, on the other hand, the use of blockchain technology may still add value in these cases as it introduces more transparency and, at least, makes for more efficient and reliable processing of data, even when that data itself is not flawless.
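The aggregation schemes sketched above (majority votes for categorical data, averages for numeric data) can be expressed in a few lines. This is an illustrative sketch; real oracle networks such as Chainlink combine aggregation with staking, reputation and penalty mechanisms:

```python
from collections import Counter
from statistics import median

def majority_vote(answers):
    """Categorical data (e.g. a match result): most frequent answer wins,
    but only if it is an actual majority; otherwise escalate."""
    value, count = Counter(answers).most_common(1)[0]
    if count <= len(answers) / 2:
        raise ValueError("no majority - fall back to dispute resolution")
    return value

def robust_numeric(answers):
    """Numeric data (e.g. a price feed): the median resists outliers far
    better than the mean, so one corrupted oracle cannot move the result."""
    return median(answers)

print(majority_vote(["2-1", "2-1", "1-1", "2-1"]))   # -> 2-1
print(robust_numeric([156, 157, 155, 900]))          # -> 156.5
```

The numeric example shows why the median is preferred over a plain average: the single outlier (900) would have dragged a mean to 342, while the median barely moves.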

The bottom line here is that blockchain-based solutions still hold great promise, but also that their most valuable applications are cases in which digitization is almost complete (e.g. in media, gaming or some parts of finance and public administration) or data is generally available.

Implications

  • While a blockchain network on its own has no single point of failure (because the data is stored on so many computers), any oracle presents a vulnerability. A provider of external data might choose to falsify it, or a third party may interfere to tamper with data or sensors.
  • In the end, for smart contracts to be meaningful in the real world, they should have legal status, similar to regular contracts, in order for the judicial system to be able to uphold one’s rights (e.g. the police could be asked to step in to effectuate some decision produced by a smart contract). At the same time, the decentralized nature and broader ambitions of the “blockchain movement” somehow undermine the status quo.
  • As we noted before, given the deliberate ambiguities we find in current law and regulation, it is questionable whether this can and should be captured in code or smart contracts. This presents a limitation for the application of smart contracts, which are most suited for business agreements and relatively simple contracts (of which there are plenty).