WACKBANG


Scientists have found a new way to stimulate lucid dreams

A drug used to treat Alzheimer's can also help users gain control of their dreams.

August 21, 2018 5:24 PM PDT

Lucid dreams, in which dreamers become aware they are dreaming and take control of the dream, can be an incredible experience. The only issue: they're rare and difficult to induce. As a result, researchers have spent decades trying different techniques to create a lucid dreaming experience, such as sleep interruption and various pre-sleep breathing exercises.

But scientists at the University of Wisconsin-Madison and the Lucidity Institute in Hawaii have figured out a more consistent way to create a lucid dreaming state, and it involves a drug normally used to treat Alzheimer's. The drug is called galantamine. In addition to being used to treat Alzheimer's, it is also regularly used to treat muscular dystrophy and other disorders of the central nervous system.

A study of experienced lucid dreaming practitioners had some dramatic results. On a placebo, 14 percent of participants reported having lucid dreams. After a 4-milligram dose of the drug, that number rose to 27 percent. Incredibly, after an 8-milligram dose of galantamine, 42 percent of participants reported having lucid dreams.

The study suggests galantamine's effectiveness might be "related to its effects on cholinergic receptor activity during REM sleep." The scientists also suggest that the drug's propensity to increase memory function might be part of the reason users are more likely to have a lucid dream on galantamine. "Lucid dreams overall," the study reported, "were associated with significantly higher levels of recall, cognitive clarity, control, positive emotion, sensory vividness and self-reflection on one's thoughts and feelings compared to non-lucid dreams." Participants reported minimal side effects.


Physicists Think They've Spotted the Ghosts of Black Holes from Another Universe

By Rafi Letzter, Staff Writer | August 21, 2018 01:48pm ET

[Image: the cosmic microwave background. Credit: ESA and the Planck Collaboration]

We are not living in the first universe. There were other universes, in other eons, before ours, a group of physicists has said. Like ours, those universes were full of black holes. And we can detect traces of those long-dead black holes in the cosmic microwave background (CMB), the relic radiation of our universe's violent birth.

At least, that's the somewhat eccentric view of the group of theorists, including the prominent Oxford University mathematical physicist Roger Penrose (also an important Stephen Hawking collaborator). Penrose and his acolytes argue for a modified version of the Big Bang. In their history of space and time (which they call conformal cyclic cosmology, or CCC), universes bubble up, expand and die in sequence, with black holes from each leaving traces in the universes that follow. And in a new paper released Aug. 6 on the preprint server arXiv, Penrose, along with State University of New York Maritime College mathematician Daniel An and University of Warsaw theoretical physicist Krzysztof Meissner, argued that those traces are visible in existing data from the CMB.

An explained how these traces form and survive from one eon to the next. [What's That? Your Physics Questions Answered]

"If the universe goes on and on and the black holes gobble up everything, at a certain point, we're only going to have black holes," he told Live Science. According to Hawking's most famous theory, black holes slowly lose mass and energy over time through radiation of massless particles called gravitons and photons. If this Hawking radiation exists, "then what's going to happen is that these black holes will gradually, gradually shrink."
At a certain point, those black holes would disintegrate entirely, An said, leaving the universe a massless soup of photons and gravitons. "The thing about this period of time is that massless gravitons and photons don't really experience time or space," he said.

Gravitons and photons, massless light-speed travelers, don't experience time and space the way we, and all the other massive, slower-moving objects in the universe, do. Einstein's theory of relativity dictates that objects with mass seem to move through time more slowly as they approach the speed of light, and distances become skewed from their perspective. Massless objects like photons and gravitons travel at the speed of light, so they don't experience time or distance at all.

"So, a universe filled with only gravitons or photons will not have any sense of what is time or what is space," An said. At that point, some physicists (including Penrose) argue, the vast, empty, post-black-hole universe starts to resemble the ultra-compressed universe at the moment of the Big Bang, where there's no time or distance between anything. "And then it starts all over again," An said.

So, if the new universe contains none of the black holes from the previous universe, how could those black holes leave traces in the CMB? Penrose said that the traces aren't of the black holes themselves, but rather of the billions of years those objects spent putting energy out into their own universe via Hawking radiation. "It's not the black hole's singularity," or its actual, physical body, he told Live Science, "but the… entire Hawking radiation of the hole throughout its history."

Here's what that means: all the time a black hole spent dissolving itself via Hawking radiation leaves a mark. And that mark, made in the background radiation frequencies of space, can survive the death of a universe.
If researchers could spot that mark, then scientists would have reason to believe that the CCC vision of the universe is right, or at least not definitely wrong.

To spot that faint mark against the already faint, muddled radiation of the CMB, An said, he ran a kind of statistical tournament among patches of sky. An took circular regions in the third of the sky where galaxies and starlight don't overwhelm the CMB. Next, he highlighted areas where the distribution of the microwave frequencies matched what would be expected if Hawking points exist. He had those circles "compete" with one another, he said, to determine which areas most nearly matched the expected spectra of Hawking points. Then, he compared that data with fake CMB data he randomly generated. This trick was meant to rule out the possibility that those tentative "Hawking points" could have formed if the CMB were entirely random. If the randomly generated CMB data couldn't mimic those Hawking points, that would strongly suggest that the newly identified Hawking points were indeed from black holes of eons past.

This isn't the first time that Penrose has put out a paper appearing to identify Hawking points from a past universe. Back in 2010, he published a paper with the physicist Vahe Gurzadyan that made a similar claim. That publication sparked criticism from other physicists and failed to convince the scientific community writ large. Two follow-up papers argued that the evidence of Hawking points Penrose and Gurzadyan identified was in fact the result of random noise in their data. Still, Penrose presses forward. (The physicist has also famously argued, without convincing many neuroscientists, that human consciousness is the result of quantum computing.)

Asked whether the black holes from our universe might someday leave traces in the universe of the next eon, Penrose responded, "Yes, indeed!"

Originally published on Live Science.
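The comparison An describes, scoring real sky patches against the same score computed on randomly generated data, is a standard Monte Carlo significance test. A minimal sketch of that idea in Python; the toy `score` function and the Gaussian noise model here are illustrative assumptions, not the authors' actual spectral-matching statistic or CMB model:

```python
import random


def score(patch):
    """Toy 'Hawking point' statistic: excess of the hottest pixel over
    the patch mean (a stand-in for the real spectral-matching score)."""
    mean = sum(patch) / len(patch)
    return max(patch) - mean


def monte_carlo_p_value(observed_patch, n_trials=10_000, seed=0):
    """Fraction of randomly generated patches whose score meets or
    exceeds the observed one. A small value suggests the observed
    pattern is unlikely to arise from purely random sky data."""
    rng = random.Random(seed)
    obs = score(observed_patch)
    n = len(observed_patch)
    hits = sum(
        score([rng.gauss(0.0, 1.0) for _ in range(n)]) >= obs
        for _ in range(n_trials)
    )
    return hits / n_trials
```

A patch containing a strong hot spot yields a small p-value (random skies rarely match it), while a featureless patch yields a p-value near 1. The real analysis applies far more elaborate statistics to actual Planck CMB maps, but the logic of the null test is the same.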


America and China: Destined for Conflict or Cooperation? We Asked 14 of the World's Most Renowned Experts

Source: https://nationalinterest.org/feature/america-and-china-destined-conflict-or-cooperation-we-asked-14-worlds-most-renowned-experts

From the diverse array of experts we assembled, we received responses from across the spectrum. Some think military conflict is inevitable. Some think there is no reason the two sides should not be able to keep the peace. Some see China as a status quo power. Others see China as a revolutionary challenger. The following presents each response in alphabetical order. (The views expressed are the authors' own and not necessarily those of their institutions.)

The experts: Graham Allison (see below), Gordon G. Chang, David Denoon, Michael Fabey, John Glaser, James Holmes, Lin Gang, Kishore Mahbubani, Robert Ross, Ruan Zongze, Robert Sutter, Xie Tao, Xu Feibiao and Wang Jisi.

Graham Allison, author of nine books, most recently Destined for War: Can America and China Escape Thucydides's Trap?, and presently the Douglas Dillon Professor of Government at the Harvard Kennedy School:

Relations between the U.S. and China are destined to get worse before they get worse. The underlying reason is Thucydides's Trap. When a rising power threatens to displace a ruling power, alarm bells should sound: extreme danger ahead. Thucydides explained this dangerous dynamic in the case of Athens's rise to rival Sparta in classical Greece. In the centuries since, this storyline has been repeated over and over. The last 500 years saw sixteen cases in which a rising power threatened to displace a major ruling power. Twelve ended in war. Unless Xi Jinping fails in his ambitions to 'Make China Great Again,' China will continue challenging America's accustomed position at the top of every pecking order. If Xi succeeds, China will displace the U.S. as the predominant power in East Asia in his lifetime.
Unless the U.S. redefines itself to settle for something less than 'Number 1,' Americans will increasingly find China's rise discombobulating. As Thucydides explained, the objective reality of a rising power's impact on a ruling power is bad enough. But in the real world, these objective facts are perceived subjectively, magnifying misperceptions and multiplying miscalculations. When one competitor 'knows' what the other's 'real motive' is, every action is interpreted in ways that confirm that bias. Under such conditions, the competitors become hostage to third-party provocations, or even accidents. An event as bizarre and otherwise inconsequential as the assassination of an archduke in Sarajevo in June 1914 forces one or the other principal protagonist to respond. In doing so, it triggers a spiral of actions and reactions that drag both to an outcome neither wanted. Candidates for that role in the current rivalry include not only Kim Jong-un but political trend lines in a democratic Taiwan, whose citizens have less and less interest in living under China's Party-driven autocracy. Having been engaged in intense discussions with many of the leaders of both China and the U.S. over the 14 months since publication of my book, Destined for War: Can America and China Escape Thucydides's Trap?, my takeaway is that if Thucydides were watching, he would say both parties are entirely on script, accelerating toward a collision that would be as catastrophic as it is unintended. Escaping Thucydides's Trap in this case will require a surge of strategic imagination as far beyond the current conventional wisdom in DC and Beijing as the remarkable Cold War strategy crafted by statesmen we now celebrate as the 'wise men' was beyond the consensus in Washington at the end of World War II.

Gordon G. Chang, columnist and author of The Coming Collapse of China:

The United States of America and the People's Republic of China have irreconcilable interests.
As a result, these two super-states are destined for intense competition and perhaps conflict. We call China "revisionist," but "revolutionary" is more precise. Chinese state media outlets these days, as in the 1950s and 1960s, carry revolutionary statements. China's media now fawn over Xi Jinping's "unique views on the future development of mankind." What is so unique about the views of the regime's supremo? In September 2017, Foreign Minister Wang Yi wrote in Study Times, the Central Party School newspaper, that Xi's "thought on diplomacy" has "made innovations on and transcended the traditional Western theories of international relations for the past 300 years." Wang's 300-year reference was almost certainly to the Treaty of Westphalia of 1648, now recognized as the basis of the current international system of sovereign nations. Wang's use of "transcended" indicates Xi is contemplating a world without states other than China, especially because Xi himself often uses language of the imperial era, when Chinese emperors maintained that they, and they alone, ruled tianxia, or "all under heaven." This tianxia worldview, increasingly evident in Xi's and Beijing's pronouncements, is, of course, fundamentally inconsistent with the existence of a multitude of sovereign states.

The Chinese view, breathtakingly ambitious, unfortunately drives many of Beijing's belligerent actions. Beijing's leaders not only speak tianxia but act tianxia. They are, for instance, trying to take territory from India in the south to South Korea in the north. At the same time, they are moving to close off international water and airspace, a direct challenge to everyone not Chinese. They are supporting the North Korean nuclear weapons and ballistic missile efforts with technology, components, equipment, materials, and financial and diplomatic support. Almost every day, their media attack the concepts of representative governance and individual freedom.
China's rulers act with impunity, injuring American pilots and diplomats and harassing American ships and aircraft. They have seized an American vessel in international water and interfered with others. They steal hundreds of billions of dollars of American intellectual property each year. They ignore their obligations to other states while expecting other states to honor theirs to China. They are engaging in nothing less than an assault on the world's rules-based order. For about 150 years, American policymakers have drawn their western defense perimeter off the coast of Asia. China each day seeks to undermine America's friends and allies in East Asia and drive the U.S. away. That effort, of course, directly undermines American security. China's challenge to America is across the board and therefore existential.

David Denoon, Professor of Politics and Economics at NYU's Department of Politics and editor of China, The United States, and the Future of Southeast Asia:

Background: The current downturn in U.S.-China relations began in 2007. The George W. Bush administration was so preoccupied with Iraq, and the Middle East more generally, plus its frustrations with the Six Party Talks on North Korea, that it failed to respond adequately as China became more assertive in the 2007-08 period. In that period the Chinese government recognized that it could exert pressure on its neighbors without provoking a sharp response from Washington. The initial signs of Chinese aggressiveness were in harassment of the Japanese over territorial claims in the East China Sea and the Senkaku/Diaoyu Islands. The Obama administration began its Asia policy with a flourish, announcing the 'Pivot to Asia' and a 'Rebalancing,' implying a greater military and economic commitment to Asia than had been given by Bush. Although the ideas underlying the rebalancing were admirable, the follow-up was unimpressive. Then a downward spiral began as the Obama administration proved indecisive.
A weak response to the Arab Spring, vacillation over Libya, and the failure to respond when the Syrian government used chemical weapons against its own population all contributed to a sense of weakness in Washington. The Chinese used the moment to press ahead with a more assertive policy in the South China Sea. Also, by 2009, the seriousness of the financial crisis in the United States was being understood, and it led many Chinese to conclude that the U.S. style of economic management was undercutting American strength. Thus, the combination of a flaccid foreign policy and economic turmoil created an ideal situation for the Chinese to be assertive. Shortly thereafter, the Chinese proceeded with the occupation and militarization of seven atolls in the South China Sea and ignored the ruling against this occupation by the International Tribunal for the Law of the Sea. This also led to a split among the Southeast Asian states, with Myanmar, Thailand, Laos and Cambodia essentially aligning with China. Subsequently the Philippines began its current game of trying to keep its treaty with the U.S. while trying to extract more aid and trade from China. The Chinese also launched a series of new institutions and programs (the Asian Infrastructure Investment Bank, the New Development Bank, the Silk Road Fund, and the Belt and Road Initiative) designed to link their economy with their neighbors'.

The Future: This is not a story that appears likely to end with all the parties living happily ever after. The key variable will be China's economic growth rate. If China can continue to grow annually at 6 percent or more, the attractions of its market and its aid will make it harder and harder for its neighbors to resist Beijing's blandishments. If China's growth rate slows, however, that will provide more opportunities for the U.S., Japan, and India. China is not currently capable of directly challenging the U.S.
militarily, so we are likely to face a situation of long-term competition, not war. If the U.S. can deal with its budget and trade deficits and avoid getting involved in unnecessary wars, U.S.-China relations will be tense but manageable. If the U.S. mishandles its economy or appears to be withdrawing from Asia, then Beijing is likely to test American commitments and conflict is much more probable.

Michael Fabey, military reporter and author of Crashback: The Power Clash Between the US and China in the Pacific:

Barring any major course change in U.S. or Chinese foreign policy, the two countries' military forces, especially their naval forces, are fated to continue to clash in the Western Pacific. The two countries hold diametrically opposed core beliefs that guide their military maneuvers in the region. The U.S. believes most of the airspace and sea lanes are internationally open regions for the benefit of any nation. China, however, lays claim to all of that as Chinese territory and feels the rest of the world should acknowledge it as fact. The U.S., through various patrols, bases and partnerships, has managed to police the sea and air lanes for more than seven decades. While some Americans may complain about the cost of being the 'world's cop,' the biggest beneficiaries during this time have been U.S. consumers and businesses. To thrive, the U.S. must maintain the free flow of commerce from, to and through the Indo-Asian-Pacific. China claims ownership of the Western Pacific territories based on the nation's regional dominance in centuries past. Chinese leaders feel they lost control of the area through 'unequal' and 'unfair' treaties imposed on them by Western powers, and China wants to right those wrongs so that it can once again be the true 'middle kingdom,' or, to put it another way, 'the center of everything.' The budding bromance between U.S.
President Donald Trump and Chinese President Xi Jinping aside, there's every indication the two countries' military positions in the Western Pacific are hardening. For example, at the beginning of this year, the Pentagon released its new National Defense Strategy, in which, for the first time, the U.S. officially identified China, along with Russia, Iran and North Korea, as adversaries and threats. Since then, the U.S. Navy has continued publicized freedom-of-navigation patrols in the region, exercised with allies and deployed new advanced weaponry in the Western Pacific, all of which has increased tensions with Chinese military leaders. U.S. naval leaders embarrassed China by disinviting its forces from the annual Rim of the Pacific (RIMPAC) exercise off the coast of Hawaii in July. The reason for the RIMPAC blackballing was Chinese militarization of bases it built on artificially created or augmented island features in the South China Sea, reneging on a promise President Xi had made against doing so just a couple of years ago. Xi has also started sending warships on patrols throughout the region, building more aircraft carriers and warning U.S. military and political leaders that he will cede none of China's claimed territory, even though that territory also happens to be land, water and air claimed by other Asian nations, including U.S. allies and partners. China wants the South China Sea to be its Caribbean. As the U.S. controls the Americas, China wants to control Asia. And with Xi having been named president for life, there's no reason to believe China will retreat from its position.

John Glaser, Director of Foreign Policy Studies at the Cato Institute:

The future of the Sino-American relationship is deeply uncertain. Though the United States will remain at the top of the international hierarchy for the foreseeable future, it is undoubtedly experiencing relative decline, while China is indisputably on the rise.
The two titans of the 21st century maintain an uneasy rapport, conscious of each other’s power, suspicious of each other’s intentions, and covetous of the stature that accompanies global supremacy. In its approach to China over the past few decades, U.S. leadership has oscillated between dismissive arrogance, sincere cooperation and brazen competition. Tragic foul-ups, like the Clinton administration’s accidental bombing of the Chinese embassy in Belgrade and the in-air collision of a U.S. spy plane with a Chinese fighter jet early in the Bush administration, are seen in Beijing as the hubristic blunders of an intemperate bully. More deliberate taunts continue to this day, exemplified by the Obama administration’s pointless opposition to innocuous Chinese initiatives like the Asian Infrastructure Investment Bank, overwrought anxiety toward the Belt and Road Initiative and President Trump’s imperious trade war ultimatums. Yet, on crucial diplomatic and security efforts, from the Six Party Talks and the Paris climate accord to post-9/11 counterterrorism cooperation and the Iran nuclear deal, the United States capitalized on overlapping interests while respecting China’s position as a vital global player. Though less than perfect, the bilateral economic relationship has been immensely beneficial to both sides. However, the U.S. approach at times appears to resemble outright containment. The cutthroat geopolitical undertones of the so-called Pivot to Asia were lost on no one. Washington’s attempts to counter Beijing’s claims in the South China Sea have, if anything, hardened China’s posture. And the Trump administration’s blunt confrontational approach seems to have provoked even greater distrust across the Pacific. Rising powers must be managed carefully. China’s growing strength will surely translate into a more ambitious foreign policy, but how we deal with it is up to us. So far, China shows no inclination toward aggressive territorial conquest. 
Nor is it clear that a Chinese-led order would differ much in its essentials from the U.S.-led order. Indeed, China's rise is more a threat to America's status as the indispensable nation than any tangible threat to national security. Many great powers throughout history have let fixations on national prestige thrust them into destructive wars. If the Sino-American relationship is to remain peaceful, we must learn to forfeit such superficial pretensions and focus on narrow, concrete security and economic interests. Failure to do so may lock us into a costly cold war that neither country can win.

James Holmes, J. C. Wylie Chair of Maritime Strategy at the Naval War College and author of Chinese Naval Strategy in the 21st Century: The Turn to Mahan:

Not too long ago we used to talk about 'managing' the rise of China, as though it were in an established great power's gift to manage what an aspiring great power does. China has risen, and is a great power in its own right. China's leadership vows to make China into a 'maritime power.' It is a maritime power of note, and has been for some time. It has the power to attempt to make good on what President Xi Jinping calls its 'Chinese Dream' of national rejuvenation following what Xi, the Chinese Communist Party, and rank-and-file Chinese citizens regard as a long century of disgrace at the hands of foreign seaborne conquerors, dating all the way back to the Opium Wars starting in 1839. China's rise, and its evident desire to modify the liberal system of maritime trade and commerce over which the United States has presided since 1945, has set an interactive competition in motion. China wants to amend the system; the U.S. wants to preserve it. Which raises this question: how much flex is there in either side's policies and strategies? I see little on Beijing's side. You have to hand it to China's leadership. This is a very open closed society, one that has put the world on notice time and again about its aims.
The party leadership has also gone on record repeatedly promising to deliver certain goals, such as union with Taiwan. As any negotiations specialist will tell you, a public promise like that represents one of the strongest commitments any leader can make. Fail to follow through and you paint yourself as weak and ineffectual. Your constituents will hold you accountable for failing to keep your promise -- perhaps in gruesome ways. Which raises the next question: how much tactical flex is there on China's side? Here's where we have some space. I believe China can be deterred. The Chinese are not irrational people. If the U.S. keeps deterring them one day at a time and convinces China it will keep doing so, perhaps over time both countries can come to some understanding that lets all of us coexist. So the burden is on the United States, its allies, and its friends to mount an adequate deterrent to Chinese mischief-making. Restore America's physical power, display the resolve to use it under certain conditions, and make believers out of Beijing in U.S. power and resolve, and the Americans might yet pull this off. As far as America's general attitude toward an accommodation with China goes, let's take our guidance from Theodore Roosevelt: speak softly and with humor; carry a big stick and show you know how to use it; be absolutely inflexible on things that are non-negotiable while being flexible on matters of secondary concern. Bottom line: we are in a long-term strategic competition, but relations need not degenerate into something really bad if we clear our minds, agree on our purposes, and resolve to compete with vigor.

Lin Gang, Shanghai Jiaotong University Chair of the Academic Committee of the School of International and Public Affairs and Director of the Center for Taiwan Studies:

Looking into the near future of U.S.-China relations, a permanent state of competition seems unavoidable.
For Beijing, trade conflict with the United States may hurt the Chinese economy, but the damage is manageable thanks to its growing market for domestic consumption. Beijing does not want a trade war with America, but it will not give in easily either. For Washington, President Trump is acutely concerned about the huge trade deficit with China and the transfer of high technology to that country. The administration's resolve to push back against China is revealed in the U.S. National Security Strategy and National Defense Strategy, in which China is labeled one of America's major "strategic competitors." For the first time since World War II, the United States acknowledges that "our competitive military advantage has been eroding." Trump's branding of China as an "economic enemy" and his recent decision to impose tariffs on $34 billion worth of Chinese products, followed by the unusual passage of U.S. warships through the Taiwan Strait amid the heightened tensions, convey a clear message.

Meanwhile, Taiwan's strategic importance to Washington has been reemphasized. Since the beginning of 2017, Washington has increased its security cooperation with the island, particularly in nontraditional spheres like anti-terrorism. In addition, the sale of a $1.42 billion arms package to Taiwan on June 29, 2017, the first such sale under the Trump administration, surely overshadowed the Xi-Trump summit in April of that year and threatened to undermine PRC-U.S. relations. The U.S. Congress has also pushed for new resolutions to upgrade Washington-Taipei relations, enhance the security of Taiwan and bolster Taiwan's participation in international organizations. Some proposals may lead to a port call by the U.S. Navy to Taiwan and the sending of uniformed Marines to the AIT in Taiwan.
Another decision that would exert a serious impact on the cornerstone of U.S.-China relations is the Taiwan Travel Act (TTA), a breakthrough in Washington's and Taipei's unofficial relationship at the price of U.S.-China ties. This does not mean that the outlook for U.S.-China relations is doomed to be pessimistic, as the two powers are comprehensively interdependent. In the words of Graham Allison, the two countries are in a state of mutually assured economic destruction (MAED). Strategically, without China's cooperation, America can achieve only limited outcomes in global affairs. However, more effort and dialogue are indispensable for crafting a working relationship between the two countries in the years to come.

Kishore Mahbubani, Professor in the Practice of Public Policy at the National University of Singapore and author of Has the West Lost It?:

George Orwell once famously remarked that "to see what is in front of one's nose needs a constant struggle." This aptly describes America's struggle to understand its changing relationship with China. It is absolutely certain that within the next decade, China will become the world's number one economy and America will become number two. The logical thing for American policymakers to do is therefore to prepare for becoming number two. However, it may be psychologically impossible for America to do so. I learned this when I chaired a forum in Davos in January 2012 entitled The Future of American Power in the 21st Century. During the forum, Republican Senator Bob Corker explained that "the American people absolutely would not be prepared psychologically for an event where the world began to believe that it was not the greatest power on earth." Since Americans are psychologically incapable of preparing for such a world, they will wake up with a rude shock when the IMF announces one day that America has become the number two economy. In this process, it is inevitable that Americans will react angrily and feel cheated by China.
This political shock is predictable but unavoidable. Yet, all is not lost. Unlike America, China is not aiming for global primacy. It only wants to secure peace and prosperity for its 1.4 billion people. As a result, even after China becomes number one, it will not try to dislodge America from its claim to primacy. China is quite happy to uphold the rules-based international order that America and the West have gifted to the world. As Xi Jinping said in Davos in 2017, "We should adhere to multilateralism to uphold the authority and efficacy of multilateral institutions. We should honor promises and abide by rules." In view of this, it is actually possible for America and China to achieve a new modus operandi with a philosophy of "live and let live," in which neither America nor China challenges the other's core interests. China will not try to displace America from regions that America values, like the Middle East. However, it would expect America to be sensitive to its core interests, like Taiwan. All these adjustments will require sensitive diplomatic negotiations. The time to prepare for them is now.

Robert Ross, Professor of Political Science at Boston College and Associate at the John King Fairbank Center for Chinese Studies at Harvard University:

U.S.-China relations are worse today than at any time since 1971, when Henry Kissinger visited China. And they will get worse. Scholars and policymakers have long observed that rising powers and power transitions contribute to international instability and that the rise of China would be destabilizing. Over the past ten years, China has significantly narrowed the gap in U.S.-China capabilities in maritime East Asia, challenging American naval dominance. It should not be surprising that there is now heightened U.S.-China competition; the power transition is taking place in a region of vital security importance for both powers.
Moreover, as this trend continues and the gap narrows further, tensions between the U.S. and China will increase. China’s rise has fed its impatience to improve its security in East Asian waters. Surrounded by U.S. alliances and military bases, it has challenged the regional security order. It has carried out a rapid build-up of its navy, island building and oil drilling in the South China Sea, coercive policies against South Korea and the Philippines in retaliation for alliance cooperation with the United States, and challenges to the maritime sovereignty claims of Japan, the Philippines and Vietnam. Chinese policy has been effective; American allies have begun to distance themselves from U.S. initiatives that challenge Chinese interests. Not content to allow China to erode U.S. maritime dominance, the United States has responded with a range of countermeasures, including the pivot to East Asia, the assignment of a greater percentage of navy ships to East Asia, frequent and high-profile naval challenges to Chinese maritime claims, and the development of the Indo-Pacific strategy. Predictably, U.S. initiatives have neither curtailed rising China’s efforts to reshape the strategic order nor stabilized U.S. alliances. China’s naval modernization and ship-production rate continue to close the gap in U.S.-China capabilities, contributing to further Chinese activism and heightened concern among American allies over the effectiveness of U.S. defense commitments. As U.S. naval dominance continues to erode and its alliance system comes under greater pressure, the United States will respond with stronger strategic initiatives designed to constrain Chinese activism and reassure its allies of its resolve to balance China’s rise. The power transition will continue, and, as China approaches naval parity, tensions will intensify. Power transitions inevitably cause heightened great-power conflict.
The stakes are high, and, in security affairs, it is a zero-sum conflict. Nonetheless, the course and outcome of the U.S.-China power transition are not predestined. The course of the conflict, including the likelihood of war, will be determined by leaders making discrete decisions, influenced by their personalities, domestic politics, including nationalism, and international dynamics. Equally important, the outcome of the transition will be influenced by decades-long economic and political trends in China and the United States. In this respect, despite China’s recent rise, the United States possesses many enduring advantages that can favor it over the long term.

Dr. Ruan Zongze, Executive Vice President and Senior Fellow at the China Institute of International Studies: The United States, make no mistake, will continue to be a major power, but the world is being ushered into a new era of an emerging multipolar global order. It is characterized by the unsettling direction of the U.S. and the rise of China. What happens between China and the United States will largely reshape the global geo-economic and geo-political landscape in the 21st century. Conventional wisdom holds that the rise of China means the demise of the United States, and that the success of China in the World Trade Organization (WTO) means the failure of the WTO. The reality, however, tells a different story. If history serves as a reminder, the China-U.S. relationship is by no means a zero-sum game. Surprisingly, recent history has shown that the relationship is productive as well as mutually beneficial. Beijing and Washington forged strong ties to deter the Soviet threat during the Cold War, to fight terrorism after September 11, and to prevent the global economy from collapsing amid the financial meltdown on Wall Street in 2008.
Similarly, China’s success in the WTO actually proves the success of the WTO as a whole, since it has brought about economic growth and prosperity for the rest of the world. Nevertheless, now is a defining moment for the China-U.S. relationship. More than anybody in memory, President Donald Trump has challenged basic assumptions of the relationship that held true for the past four decades. The growing tensions between the world's top two economies have sparked debate and uncertainty over the future orientation of U.S.-China relations, and will definitely generate negative effects on the world economy. Unlike the former Soviet Union, China has worked very hard to integrate itself into the current international system by recognizing de facto American supremacy. Furthermore, China’s integration into the global system makes it a stakeholder. China is committed to championing an open world economy and a multilateral trade regime as global growth remains unsteady despite signs of recovery. Beijing has called for concerted efforts to foster new drivers of growth, promote more inclusive growth and improve global economic governance. I trust that an eventual restoration of a more friendly and cooperative relationship should be expected. Yet Sino-American relations will travel a bumpy road before they get better.

Robert Sutter, Professor of Practice of International Affairs at George Washington University and author of U.S.-Chinese Relations: Perilous Past, Pragmatic Present: Self-absorbed and increasingly powerful, authoritarian China works covertly and overtly against American interests and influence at home and abroad. A populist domestic upsurge in American politics demands higher priority for U.S. interests. The result is the most substantial negative change in American policy toward China in fifty years.
The Trump administration and congressional officials register broad anger and growing angst over how China has for years unfairly taken advantage of America’s open economy and accommodating posture to strengthen Chinese power for use against U.S. leadership. The stakes are more serious today because China is widely seen as a peer competitor and the trajectory of the U.S.-China power balance is viewed as favoring Beijing. American military, intelligence and domestic security departments are implementing administration strategies that treat China as a predatory and revisionist rival seeking dominance. They have widespread support in Congress. Longstanding American concerns over China’s growing military challenges combine with newly prominent concerns about Beijing’s efforts to infiltrate and influence U.S. opinion and politics. Chinese state-directed exploitation of the U.S.-backed international economic order to weaken America and advance China’s economic capacity now poses an ominous challenge to American leadership in the modern economy. Trump administration trade and investment policies have been conflicted; the recent focus on punitive tariffs is costly and controversial. American media and public opinion have begun to discern the overall grim turn in U.S. government policies against China, but it is unclear how far they will go in supporting the shift from past positive U.S. engagement with China. Americans seeking to accommodate Beijing and 'meet China half-way' will likely be drowned out by growing disclosures of how China has manipulated such positive American approaches to strengthen Beijing and weaken America. China is determined to pursue its current course. The impasse will grow. For now, neither side wants conflict or war, but both are prepared to test the other by advancing in such sensitive areas as improving U.S. ties with Taiwan and China’s widespread espionage and manipulation of American opinion. Chinese promises and reassurances count for little.
A serious challenge to, or decline in, China’s perceived power would alter American angst over the prospect of Chinese dominance, possibly allowing for more mutual accommodation.

Xie Tao, Professor at the School of English and International Studies at Beijing Foreign Studies University and author of Living with the Dragon: How the American Public Views the Rise of China: Chinese leaders often hail economic cooperation as the “ballast" and the “propeller" of the U.S.-China relationship. Now that the two countries are fighting a hundred-billion-dollar trade war, is the ship of bilateral relations doomed to sink? Not necessarily. There has been no anti-American protest since President Trump launched the trade war on July 6. The absence of such protests could imply that ordinary Chinese are not terribly upset by Trump’s hostile actions. And there is a good reason for them not to be, as initial concessions offered by Beijing—to reduce import tariffs, for example—mean cheaper foreign products and services for the average Chinese consumer. That is to say, the Chinese public does not seem to be in the mood for a sharp downturn in bilateral economic relations. More important, given the Chinese government’s tight control over nationalist protests, muted public reactions could be a powerful signal that Beijing is still willing to seek a compromise with Washington. For one thing, such a war will probably harm the Chinese economy much more than it does the American economy. After all, the United States is China’s largest export market, and there is simply no alternative that is as big and lucrative as the American market. Besides, Washington could retaliate by sharply curtailing China’s access to U.S. high technology. The Chinese economic juggernaut has been driven primarily by exports and investment, not by innovation. The fate of ZTE—a Chinese telecommunications giant sanctioned by the U.S.
Commerce Department—amply illustrates China’s overwhelming dependence on American technology. In the high-tech realm at least, America is indispensable to China, but not the other way around. If the above analysis is correct, then the trade war will likely wind down fairly soon. But Chinese willingness to compromise should not be interpreted as a sign of weakness, that is, as evidence that it pays to get tough on China. Admittedly, getting tough on China seems to be the new consensus in Washington, owing to increasing concerns over Chinese influence in Western societies (so-called sharp power) as well as the perceived failure of Beijing to embrace democracy, adopt a market economy, and defer to American leadership. The danger of this consensus, though, is that it will undoubtedly empower those in Beijing who are opposed to deepening political and economic reform. Getting tough on China may well produce a tougher China that sees no choice but to engage in intense and comprehensive competition—geopolitical, economic, and ideological—with Washington. But is America ready for a new cold war?

Xu Feibiao, Director of the Division of Trade and Investment Studies at the China Institutes of Contemporary International Relations: No one would deny that the relationship between the U.S. and China, the biggest power on this planet and the biggest emerging power, is the most important and complicated one of the 21st century. For the last forty years, especially after China’s reform and opening up beginning in the late 1970s, the Sino-U.S. relationship has become increasingly consolidated through ups and downs. The two countries are so integrated economically and financially that any disruption in bilateral relations would lead to great losses and market turbulence in both countries and even the whole world. Consider these facts: U.S.
enterprises in China make profits of more than $200 billion every year, and China exports more than $500 billion in goods to the United States annually, the majority of which are produced, exported, and turned into profit by companies from America and other foreign countries. The two countries have increasingly entwined interests in this globalized and interconnected world, and both of them benefit greatly from the relationship. From the Chinese perspective, an improved and consolidated Sino-U.S. relationship usually means a favorable external development environment for China, which is desperately needed, not to mention the technology, capital, and huge market of the U.S., all important factors for China’s economy. From the U.S. side, the benefits are also huge and obvious. The relationship has greatly advanced America’s strategic interests, such as balancing and weakening the Soviet Union, winning the Cold War and the war on terror, and countering the proliferation of WMDs. It will continue to reap benefits for the U.S. on such issues as climate change, countering extremism, bringing peace to the Korean peninsula, solving problems with Iran, cyber security and so on. The huge volume of cheap and high-quality goods from China, the second-largest market in the world, also benefits the U.S. greatly. One thing worth noting is that during and after the financial crisis, China continued to buy trillions of dollars in U.S. debt and assets, and has anchored its currency to the dollar, which helps maintain the predominant role of the U.S. in the current international monetary system. That trajectory of bilateral relations and the deepening interlocking interests of both countries mean that Sino-U.S. relations will stay stable for the near future. Of course, there will be more disruptions and frictions in the next few years because of the ‘Trump shock’ from the American side.
To a large extent, the media exaggerate the ongoing ‘trade war.’ There is only a small possibility that the two countries will turn against each other as enemies, but it is also not likely that the two will become good friends. This time the adversary facing the U.S. is different: a nuclear giant dedicated to opening up and to building a community of common destiny.

Wang Jisi, President of Peking University's Institute of International and Strategic Studies and editor of The Rise of China and a Changing East Asian Order: The China-U.S. relationship is not doomed to a Cold War-style confrontation. Neither is it destined to avoid a deadly conflict. The more likely trend on the road ahead is a further deterioration of relations until both China and the United States come to the realization - maybe following a tragic crisis - that they have to negotiate a ‘deal’ of mutual tolerance. Looking back at history, it is China, not America, that has played the decisive role in shaping the relationship. China changed the character of its ties with America in 1949 when the People’s Republic was founded. China again reshaped the contours of the relationship after 1978, when its leadership decided to embark on reform and opening. Since then, China-U.S. economic and cultural relations have prospered. Major changes in American politics, like the civil rights movement, the financial crisis of 2008, and changes of administration in Washington, have hardly affected the landscape of U.S.-China interactions. Now, once again, it is mainly China’s power and behavior that are driving the shift in bilateral ties. The Americans are alarmed by China’s expansion of global influence, exemplified by the Belt and Road Initiative, its reinforcement of the role of the state in the economy and society, and the consolidation of Communist Party leadership and its ideology.
The current trade friction is only a reflection of the deep-rooted, widening cleavages in political values, power structures, and national goals between the two giants. The United States has now identified China as a major external threat even as its bonds with other countries are weakening. China seems unruffled in its march toward becoming a global game-changer, defiant of Western values and practices. However, both countries are encountering daunting challenges at home that are much greater and more urgent than geostrategic contentions abroad. Both China and America are undergoing dramatic domestic transformations, the destinations of which will determine whether, and how, they can find a way to renovate the links that have benefited the two sides over the last forty years. China is changing more rapidly than America. But China will continue to change at its own pace and on its own track, and ultimately in the right direction. To dodge an ill fate, the two countries should engage each other in a benign competition to see which country is better able to make its people happier and more dignified, and which will earn more respect in the world.


How AI Can Amplify Human Competencies

Advanced systems will continue to help people do their jobs better instead of replacing them.

MIT SMR Frontiers: This article is part of an MIT SMR initiative exploring how technology is reshaping the practice of management.

Though artificial intelligence systems are already becoming a part of daily life, recent debates about AI and the future of work have gained a sense of urgency. The late Stephen Hawking worried that humans “couldn’t compete, and would be superseded" by machines, while Tesla founder Elon Musk has suggested that competition in AI could lead to World War III. The Economist reported earlier this year that nearly half of the jobs in 32 developed countries surveyed by the Organisation for Economic Co-operation and Development (OECD) were vulnerable to automation, declaring, “a wave of automation anxiety has hit the West." Ken Goldberg, professor and department chair of industrial engineering and operations research at UC Berkeley, is pushing back on all of that. Instead of embracing the notion that robots will surpass humans and replace us in the workforce (a concept referred to as “singularity"), he argues for “multiplicity" — a hybrid view of how new technologies and people might work in partnership toward human goals. To an extent, he says, this is how AI is already starting to function. MIT Sloan Management Review correspondent Frieda Klotz spoke with Goldberg about a future in which AI is a complement, not a threat, to workers. What follows is an edited and condensed version of their conversation.

MIT Sloan Management Review: What areas of robotic technology is your lab currently working on?

Goldberg: We’re developing robot software for tasks as wide-ranging as warehouse order fulfillment, home decluttering and robot-assisted surgery.
What’s common to all the work we’re doing is the idea of algorithms and learning for robots, improving our ability to analyze data and examples and then use that to build control policies — or models — for how robots can move. The area I’ve been working on for 35 years is robot grasping — how to reliably pick up objects. It’s easy for humans, but it’s a problem for robots. Basically, every robot is still a klutz, and that’s a big challenge if you want to develop one that will declutter a home or pack boxes in a warehouse.
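The learning loop Goldberg describes (collect examples, fit a model, use its scores to decide how the robot moves) can be caricatured in a few lines. Everything below is invented for illustration: the two geometric features, the hidden success rule, and the plain-NumPy logistic model are toy stand-ins, not the lab's actual software.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_grasps(n):
    """Generate labeled toy grasps. Features and success rule are made up."""
    centring_error = rng.uniform(0, 1, n)    # 0 = gripper perfectly centred
    normal_deviation = rng.uniform(0, 1, n)  # 0 = ideal antipodal contact
    X = np.column_stack([centring_error, normal_deviation])
    # Hidden "ground truth": grasps fail more often as both errors grow.
    p_success = 1 / (1 + np.exp(6 * (centring_error + normal_deviation - 1)))
    y = (rng.uniform(0, 1, n) < p_success).astype(int)
    return X, y

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Logistic regression by gradient descent: learn a grasp-quality model."""
    Xb = np.column_stack([np.ones(len(X)), X])  # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def score(w, X):
    """Predicted success probability; a policy would attempt the top-scoring grasp."""
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1 / (1 + np.exp(-Xb @ w))

X, y = sample_grasps(5000)
w = fit_logistic(X, y)

good = score(w, np.array([[0.1, 0.1]]))[0]  # well-centred, near-antipodal
bad = score(w, np.array([[0.9, 0.9]]))[0]   # badly placed grasp
print(f"predicted success: good grasp {good:.2f}, bad grasp {bad:.2f}")
```

The point of the sketch is only the shape of the pipeline: data and examples in, a control policy (here, a scoring model) out. Real grasp-planning systems work from depth images and physics-based quality metrics rather than two hand-picked scalars.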


List of companies involved in quantum computing or communication - Wikipedia

List of companies involved in quantum computing or communication
From Wikipedia, the free encyclopedia

Company | Date initiated | Area | Affiliated university or research institute | Headquarters (cells left blank in the source are omitted)

1QBit | 1 December 2012 | Computing | Vancouver, Canada
River Lane Research | 2016 | Applications, software & benchmarking | Cambridge, UK
Accenture[1] | 14 June 2017 | Computing
imec[2] | Silicon Quantum Computing | Belgium
Airbus[3] | 2015 | Computing | Blagnac, France
Aliyun (Alibaba Cloud)[4] | 30 July 2015 | Computing/Communication[4][5] | Chinese Academy of Sciences[6][5][7] | Hangzhou, China
AT&T[8] | 2011 | Communication | Dallas, TX, USA
Atos[9] | Communication | Bezons, France
Booz Allen Hamilton[10] | Computing | Tysons Corner, VA, USA
BT[11] | Communication | London, UK
Carl Zeiss AG[12] | University College London | Oberkochen, Germany
Cambridge Quantum Computing Limited[13] | Communication | Cambridge, UK
D-Wave | 1 January 1999 | Computing | Burnaby, Canada
Elyah[14] | 6 June 2018 | Computing | Dubai, UAE
Everettian Technologies[15] | 1 September 2017 | Computing | Waterloo, Canada
Fujitsu[16] | 28 September 2015 | Communication | University of Tokyo | Tokyo, Japan
Google QuAIL[17] | 16 May 2013 | Computing | UCSB | Mountain View, CA, USA
HP[18][19] | Computing[18]/Communication[19] | Palo Alto, CA, USA
Hitachi | Computing | University of Cambridge, University College London | Tokyo, Japan
Honeywell[20][21] | Computing | Georgia Tech,[20] University of Maryland[21] | Morris Plains, NJ, USA
HRL Laboratories | Computing | Malibu, CA, USA
Huawei Noah's Ark Lab[22] | Communication | Nanjing University | Shenzhen, China
IBM[23] | 10 September 1990[24] | Computing | MIT[25] | Armonk, NY, USA
ID Quantique | 1 July 2001 | Communication | Geneva, Switzerland
ionQ[26][27] | Computing | University of Maryland, Duke University | College Park, MD, USA
InfiniQuant[28] | Communication | Max Planck Institute for the Science of Light, University of Erlangen-Nuremberg | Erlangen, Germany
Intel[29] | 3 September 2015 | Computing | TU Delft | Santa Clara, CA, USA
KPN[30] | Communication | The Hague, Netherlands
Lockheed Martin | Computing | University of Southern California, University College London | Bethesda, MD, USA
MagiQ | Communication | Somerville, MA, USA
Microsoft Research QuArC | 19 December 2011 | Computing | TU Delft, Niels Bohr Institute, University of Sydney, Purdue University, University of Maryland, ETH Zurich, UCSB | Redmond, WA, USA
Microsoft Research Station Q | 22 April 2005 | Computing | UCSB | Santa Barbara, CA, USA
Mitsubishi[31] | Communication | Tokyo, Japan
NEC Corporation[32] | 29 April 1999[33] | Communication | University of Tokyo | Tokyo, Japan
Nokia Bell Labs[34][35] | Computing | University of Oxford | Murray Hill, NJ, USA
Northrop Grumman | Computing | West Falls Church, VA, USA
NTT Laboratories[36] | Computing | Bristol University | Tokyo, Japan
Q-Ctrl[37][38][39] | 2017 | Computing[note 1] | Sydney, Australia
Qbitlogic International[40] | 2014[41] | Computing | Atlanta, GA, USA[41]
QC Ware[42] | 2014[43] | Palo Alto, California, USA[43]
Qilimanjaro[44] | 2018 | Computing | Barcelona, Spain
Qnami[45] | 2017 | Quantum Sensing | University of Basel | Basel, Switzerland
Qrithm | 2016 | Algorithms | Pasadena, California, USA
Quantum Numbers Corp | 2016 | Quantum Encryption | Université de Sherbrooke | Brossard, Quebec, Canada
QuintessenceLabs | Communication | Deakin, ACT, Australia
QxBranch | 2014 | Computing | Washington, D.C., USA
Raytheon/BBN[46] | Computing/Communication | MIT | Cambridge, MA, USA
Rigetti Computing | Computing | Berkeley, California, USA
Siemens Healthineers | Computing | University College London | Erlangen, Germany
Delft Circuits | Hardware | QuTech | Delft, The Netherlands
RIKEN[47] | Computing | Tokyo University of Science | Wako, Japan
Strangeworks | Computing | Austin, Texas, USA
Toshiba[48] | Communication | University of Cambridge | Tokyo, Japan
Xanadu[49] | 2017 | Computation | Toronto, Canada
Zapata Computing[50] | 2018 | Computing | Cambridge, Massachusetts, USA

Notes
[note 1] Q-Ctrl also develops quantum sensing technology that has applications outside of quantum computing and communication.
References
[1] "(Press Release) Accenture Labs and 1QBit Work with Biogen to Apply Quantum Computing to Accelerate Drug Discovery". newsroom.accenture.com. Retrieved 2017-10-04.
[2] https://www.imec-int.com/en/quantum-computing
[3] Tovey, Alan (2015-12-26). "Airbus's quantum computing brings Silicon Valley to the Welsh Valleys". ISSN 0307-1235. Retrieved 2017-11-02.
[4] "(Press Release) Alibaba Group". www.alibabagroup.com. Retrieved 2017-10-04.
[5] "Alibaba looks to quantum computing for next-gen cloud". Business Cloud News. Retrieved 2017-11-01.
[6] "Alibaba Launches Global Research Program for Cutting-Edge Technology Development". www.businesswire.com. Retrieved 2017-11-01.
[7] Horwitz, Josh. "Alibaba is plowing $15 billion into R&D with seven new research labs worldwide". Quartz. Retrieved 2017-11-01.
[8] "(Press Release) AT&T Labs Research - Photon Entanglement over the Fiber-Optic Network". www.research.att.com. Retrieved 2017-10-04.
[9] "(Press Release) Race Against Time: Securing our Future Data with Quantum Encryption". Ascent. 2015-03-16. Retrieved 2017-10-04.
[10] "Press Release".
[11] "PDF" (PDF).
[12] "Industry Projects | UCL Quantum". www.uclq.org. Retrieved 2017-10-04.
[13] Cambridge Quantum Computing. "Certified True Randomness Created by Cambridge Quantum Computing". www.prnewswire.co.uk. Retrieved 2017-11-04.
[14] "Elyah". www.elyah.io. Retrieved 2018-02-21.
[15] "Everettian Technologies Inc". www.everettian.com. Retrieved 2018-02-21.
[16] "(Press Release) University of Tokyo, Fujitsu, and NEC Succeed in Quantum Key Distribution from Single-Photon Emitter at World-Record Distance of 120 km". www.fujitsu.com. Retrieved 2017-10-04.
[17] "(Press Release) Launching the Quantum Artificial Intelligence Lab". Research Blog. Retrieved 2017-10-04.
[18] "HP Labs: Quantum Information Processing (QIP)". www.hpl.hp.com. Retrieved 2017-10-04.
[19] "Research at HP Labs: Information Dynamics Lab: Research Areas: Quantum Computing". www.hpl.hp.com. Retrieved 2017-11-01.
[20] "(Press Release) Development of New Ion Traps Advances Quantum Computing Systems". www.news.gatech.edu. Retrieved 2017-10-04.
[21] "Ion-trap quantum computer is programmable and reconfigurable". physicsworld.com. Retrieved 2017-11-01.
[22] http://stuex.nju.edu.cn/en/a/Research_Organizations/20150808/483.html
[23] "Theory of quantum computing and information group - IBM". researcher.ibm.com. 2016-07-25. Retrieved 2017-10-04.
[24] C. H. Bennett et al., J. Cryptology 5, 3 (1992). doi:10.1007/BF00191318
[25] "MIT-IBM Watson AI Lab".
[26] Hernandez, Daniela (2016-08-03). "Scientists Harness Quantum Physics to Build a Programmable Computer". Wall Street Journal. ISSN 0099-9660. Retrieved 2017-10-04.
[27] Gregg, Aaron (2017-01-01). "Start-up IonQ sees opportunity in still-developing area of quantum computers". Washington Post. ISSN 0190-8286. Retrieved 2017-11-01.
[28] "Satellite-Based QKD" (PDF).
[29] "(Press Release) Intel Invests US$50 Million to Advance Quantum Computing". Intel Newsroom. Retrieved 2017-10-04.
[30] "(Press Release) Pers". KPN Corporate (in Dutch). Retrieved 2017-10-04.
[31] "Press Release".
[32] "(Press Release) University of Tokyo, Fujitsu, and NEC Succeed in Quantum Key Distribution from Single-Photon Emitter at World-Record Distance of 120 km". NEC. Retrieved 2017-10-04.
[33] Y. Nakamura, Yu. A. Pashkin & J. S. Tsai, Nature 398, 786 (1999). doi:10.1038/19718
[34] "Quantum Computing & Communications - Bell Labs". www.bell-labs.com. Retrieved 2017-10-04.
[35] "The Future of Quantum Computing Could Depend on This Tricky Qubit". WIRED. Retrieved 2017-11-04.
[36] Carolan, Jacques; Harrold, Christopher; Sparrow, Chris; Martín-López, Enrique; Russell, Nicholas J.; Silverstone, Joshua W.; Shadbolt, Peter J.; Matsuda, Nobuyuki; Oguma, Manabu (2015-07-09). "Universal linear optics". Science: aab3642. arXiv:1505.01182. doi:10.1126/science.aab3642. ISSN 0036-8075. PMID 26160375.
[37] "Quantum start-up Q-Ctrl 'unmixing the soup' of qubit decoherence". Computerworld. Retrieved 2017-11-27.
[38] "CSIRO bid to keep 'spooky action' going". Financial Review. 2017-11-02. Retrieved 2017-11-27.
[39] "Q-Ctrl homepage".
[40] "Innovation hub – not another Alice in Wonderland". MaltaToday.com.mt. Retrieved 2017-11-04.
[41] "QbitLogic". Crunchbase. Retrieved 2017-11-04.
[42] Wigglesworth, Robin (November 1, 2017). "Renaissance, DE Shaw look to quantum computing for edge". Financial Times. Retrieved 2017-11-04.
[43] "QC Ware". Crunchbase. Retrieved 2017-11-04.
[44] "Qilimanjaro". Retrieved 2018-04-26.
[45] "Qnami - The quantum wave". Retrieved 2018-05-25.
[46] Raytheon Corporate Communications. "Raytheon: Quantum information". www.raytheon.com. Retrieved 2017-10-04.
[47] "Superconducting Quantum Simulation Research Team". www.riken.jp. Retrieved 2017-10-04.
[48] Toshiba Research Europe Limited. "Toshiba: CRL - Quantum Information". www.toshiba.eu. Retrieved 2017-10-04.
[49] "Xanadu Quantum Computing Inc". www.xanadu.ai. Retrieved 2018-03-25.
[50] "Zapata Computing Inc". www.zapatacomputing.com. Retrieved 2018-05-24.


Review: How Laws of Physics Govern Growth in Business and in Cities

The new book by Geoffrey West, a theoretical physicist, comes with a mouthful of a subtitle that suggests he has unlocked the secrets of human existence: “Scale: The Universal Laws of Growth, Innovation, Sustainability and the Pace of Life in Organisms, Cities, Economies, and Companies" (Penguin). Spoiler alert: He hasn’t. But don’t let this dissuade you from joining him on an enchanting intellectual odyssey. Mr. West’s core argument is that the basic mathematical laws of physics governing growth in the physical world apply equally to biological, political and corporate organisms. On its face, his book’s objective is to contribute to an overarching behavioral science of what it calls “highly complex systems." But the book is also a satisfying personal and professional memoir of a distinguished scientist whose life’s work came to be preoccupied with finding ways to break down traditional boundaries between disciplines to solve the long-term global challenges of sustainability.

The central observation of “Scale" is that a wide variety of complex systems respond similarly to increases in size. Mr. West demonstrates that these similarities reflect the structural nature of the networks that undergird these systems. The book identifies three core common characteristics of the hierarchical networks that deliver energy to these organisms — whether the diverse circulatory systems that power all forms of animal life or the water and electrical networks that power cities. First, the networks are “space filling" — that is, they service the entire organism. Second, the terminal units are largely identical, whether they are the capillaries in our bodies or the faucets and electrical outlets in our homes. Third, a kind of natural selection process operates within these networks so that they are optimized.
These shared network qualities explain why, when an organism doubles in size, an astonishing range of characteristics, from food consumption to general metabolic rate, grows something less than twice as fast — they scale “sublinearly." What’s more, “Scale" shows why the precise mathematical factor by which these efficiencies manifest themselves almost always relates to “the magic No. 4." Mr. West also provides an elegant explanation of why living organisms have a natural limit to growth and a life span following a predictable curve, as an increasing proportion of the energy consumed is required for maintenance and less is available to fuel further expansion. When he turns to cities, Mr. West shows that infrastructure growth scales in analogous sublinear fashion. Hence, the number of gas stations or length of roads needed when a city doubles in size reflects similar economies of scale. But relevant socioeconomic qualities actually scale superlinearly by the same factor. And while it is good news that large cities produce higher wages and more patents per inhabitant, they also generate relatively greater crime and disease. This conundrum is at the heart of Mr. West’s sustainability concerns. Theoretically, the unbounded growth of cities generated by superlinear scaling, “if left unchecked, potentially sow[s] the seeds of their inevitable collapse."

Despite his reliance on the analysis of huge troves of data to develop and support his theories, in the concluding chapters Mr. West makes a compelling argument against the “arrogance and narcissism" reflected in the growing fetishization of “big data" in itself. “Data for data’s sake," he argues, “or the mindless gathering of big data, without any conceptual framework for organizing and understanding it, may actually be bad or even dangerous." In presenting his own provocative and fascinating conceptual framework, Mr. West manages to deliver a lot of theory and history accessibly and entertainingly.
Yet it is not clear whether that framework is robust enough to be applied productively to the business realm, as he attempts to do. Mr. West concedes early on that the strength of the mathematical correlations on which he relies decreases as he moves from the biological to the urban to the corporate. Until relatively recently, Mr. West was unable to get funding to access a database of historical corporate information, and at one point in the book he seems to blame this challenge for the particularly thin results in this domain. The problems with his analysis of the business sector, however, may be more systemic.

First, it is at least questionable whether the constantly shifting hierarchical network structures of corporate organizations are consistent with the three fundamental characteristics of networks upon which his framework is based. Notably, a wide range of behavioral economics research, grounded in the pioneering work of Daniel Kahneman and Amos Tversky, suggests that the optimization requirement is not likely to be met. Furthermore, the consistent "decay" rates of corporations identified by Mr. West, calculated from the longevity of independent public corporations over time, do not correspond to any consistent change in underlying activity analogous to "death" in living organisms. Even in the context of bankruptcy, which Mr. West looks at separately from corporate "death" by merger and acquisition, good businesses with bad capital structures often continue "life" under a new corporate form. It is not evident how meaningful mathematical calculations can be that treat such situations the same as failed businesses that are simply liquidated in bankruptcy for scrap value.

That "Scale" fails to realize the full promise of its title does not diminish the magnitude of its actual contribution and insight. In the 16th century, François Rabelais, a French scholar, admonished that "science without conscience is the ruin of the soul." Mr.
West's warning that big data without a theoretical framework is the ruin of science is an important contemporary corollary, a caution that "Scale" will hopefully establish for the next generation of scholars.

Jonathan A. Knee is professor of professional practice at Columbia Business School and a senior adviser at Evercore Partners. His latest book is "Class Clowns: How the Smartest Investors Lost Billions in Education." NYTimes


The ‘Zombie Gene’ That May Protect Elephants From Cancer

With such enormous bodies, elephants should be particularly prone to tumors. But an ancient gene in their DNA, somehow resurrected, seems to shield the animals.

[Image: Elephants, like their forebears mastodons and mammoths, carry a unique gene whose proteins kill off potentially cancerous cells. Credit: Scanpix, via Reuters]

Aug. 14, 2018

Elephants ought to get a lot of cancer. They're huge animals, weighing as much as eight tons. It takes a lot of cells to make up that much elephant. All of those cells arose from a single fertilized egg, and each time a cell divides, there's a chance that it will gain a mutation, one that may lead to cancer.

Strangely, however, elephants aren't more prone to cancer than smaller animals. Some research even suggests they get less cancer than humans do. On Tuesday, a team of researchers reported what may be a partial solution to that mystery: elephants protect themselves with a unique gene that aggressively kills off cells whose DNA has been damaged.

Somewhere in the course of evolution, the gene had become dormant. But somehow it was resurrected, a bit of zombie DNA that has proved particularly useful. Vincent J. Lynch, an evolutionary biologist at the University of Chicago and a co-author of the paper, published in Cell Reports, said that understanding how elephants fight cancer may provide inspiration for developing new drugs.

"It might tell us something fundamental about cancer as a process. And if we're lucky, it might tell us something about how to treat human disease," Dr. Lynch said.

Scientists have puzzled over cancer, or the lack thereof, in big animals since the 1970s.
In recent years, some researchers have started carrying out detailed studies of the genes and cells of these species, searching for unexpected strategies for fighting the disease. Some of the first research focused on a well-studied anticancer gene called p53. It makes a protein that can sense when DNA gets damaged. In response, the protein switches on a number of other genes. A cell may respond by repairing its broken genes, or it may commit suicide, so that its descendants will not have the chance to gain even more mutations.

In 2015, Dr. Lynch and his colleagues discovered that elephants have evolved unusual p53 genes. While we have only one copy of the gene, elephants have 20 copies. Researchers at the University of Utah independently made the same discovery. Both teams observed that the elephant's swarm of p53 genes responds aggressively to DNA damage. Their bodies don't bother with repairing cells; they only orchestrate the damaged cell's death.

Dr. Lynch and his colleagues continued their search for cancer-fighting genes, and they soon encountered another one, called LIF6, that only elephants seem to possess. In response to DNA damage, p53 proteins in elephants switch on LIF6. The cell makes LIF6 proteins, which then wreak havoc. Dr. Lynch's experiments indicate that LIF6 proteins make their way to the cell's tiny fuel-generating factories, called mitochondria. The proteins pry open holes in the mitochondria, allowing molecules to pour out. The molecules from mitochondria are toxic, causing the cell to die.

"This adds an important piece to the puzzle," said Dr. Joshua D. Schiffman, a pediatric oncologist at the Huntsman Cancer Institute at the University of Utah who has also studied cancer in elephants. More experiments are needed to confirm that LIF6 works the way Dr. Lynch and his colleagues propose, Dr. Schiffman added.
"As a start, I think this is fantastic," he said.

LIF6 has a bizarre evolutionary history, as it turns out. All mammals carry a similar gene, simply called LIF. In our own cells, it performs several different jobs, such as sending signals from one cell to another. But almost all mammals, ourselves included, have only one copy. The only exceptions to that rule are elephants and their close relatives, such as manatees, Dr. Lynch and his colleagues found. These mammals have several copies of LIF; elephants have ten.

These copies arose thanks to sloppy mutations in the ancestors of manatees and elephants more than 80 million years ago. The newer copies of the original LIF gene lack a stretch of DNA that acts as an on-off switch. As a result, the genes could not make their proteins. (Humans also carry thousands of copies of such so-called pseudogenes.)

After the ancestors of elephants evolved ten LIF genes, however, something remarkable happened: one of these dead genes came back to life. That gene is LIF6. Somewhere in the course of elephant evolution, a mutation inserted a genetic switch next to LIF6, enabling the gene to be activated by p53. The resurrected gene now made a protein that could do something new: attack mitochondria and kill damaged cells.

To find out when the LIF6 gene first came back to life, the researchers took a close look at DNA retrieved from fossils. Mastodons and mammoths also carried LIF6. Scientists estimate that they shared a common ancestor with modern elephants that lived 26 million years ago. Dr. Lynch speculated that LIF6 came back to life at the same time that the ancestors of living elephants evolved extra copies of p53. As they developed more powerful defenses against cancer, the animals could begin reaching their enormous sizes. Elephants likely evolved other new genes that follow p53's orders, Dr. Lynch predicted.
He also suspects that elephants have evolved ways to fight cancer that are separate from p53 altogether. "I think it's all of the above," he said. "There are lots of stories like LIF6 in the elephant genome, and I want to know them all."


Maria Konnikova Shows Her Cards

The well-regarded science writer took up poker while researching a book. Now she's on the professional circuit.

[Image: "Luck is just … randomness," said Maria Konnikova. "That's what I wanted to write about. Poker was a way into it." Credit: Joshua Bright for The New York Times]

As a science writer at The New Yorker, Maria Konnikova, 34, focuses on the brain, and the weird and interesting ways people use their brains. Dr. Konnikova is an experimental psychologist trained at Columbia University. But her latest experiment is on herself. For a book she's researching on luck and decision-making, Dr. Konnikova began studying poker. Within a year, she had moved from poker novice to poker professional, winning more than $200,000 in tournament jackpots. This summer PokerStars, an online gaming site, began sponsoring Dr. Konnikova in professional tournaments. We spoke recently for two hours at the offices of The Times. An edited and condensed version of the conversation follows.

You've taken a year's sabbatical from The New Yorker to play on the professional poker circuit. Why?

I'd been thinking, for a while, about what my next book was going to be. I was interested in the theme of skill versus chance and was looking for a way to get into it. A friend suggested I read John von Neumann's "Theory of Games and Economic Behavior," the foundational text of game theory. Von Neumann, as you know, was one of the geniuses of the 20th century: the hydrogen bomb, computing, economics. And he'd been a poker player. It turned out that all of game theory came out of poker! When he was trying to understand how strategic decision-making worked, he concluded that poker was the perfect analog, because it was a blend of skill and chance and because, over the long run, skill can win.
I decided that poker was the way to go. I knew I'd need to spend a few months living in that world. I thought, "I'm going to have to dedicate myself to this like a career, because otherwise it's just going to be 'a writer dabbles in the world of poker.'"

Did you have a background in the game?

No, no. When I started this, I didn't know how many cards were in a deck. I hate casinos. I have zero interest in gambling. Then I met Erik Seidel, one of the best poker players in the world. He agreed to become my coach, though he told me, "You're a hard worker, and you have a good background for this, but who knows if you're going to be any good?" It's been an unexpected journey. I don't think anyone could have predicted that I would have gone in less than a year from not knowing how many cards were in a deck to winning a major poker title.

What did that involve?

I've been studying, playing, living, breathing poker for eight to nine hours a day. Every day! When I'm between events and in New York, I'm reading, watching videos or live-streaming very good players. There might be a specific concept I want to work on, and I'll watch some videos of people doing this and take notes. Sometimes I'll go to New Jersey and hop onto the poker website at an internet cafe. Online poker is illegal in New York, but not in Jersey.

When Erik Seidel said you had the right background, what did he mean?

I think he was talking about my background in experimental psychology. I did a doctorate on overconfidence and risky decision-making with Walter Mischel, who invented the "marshmallow test." I wanted to see if people with high levels of self-control made better decisions in risky conditions, like in the stock market. Usually, people with high self-control do so much better at everything than people with low self-control.
But it turns out that in unpredictable environments like the stock market, successful high self-control people, when in an environment where control is taken away from them, take longer to figure things out. They are too confident and won't take negative feedback from the environment. Whereas people with lower self-control and who aren't as successful are like, "Uh oh, a bad thing is happening. I guess I should actually figure that out."

Are the other poker pros nice to you?

For the most part. I've been very lucky because my coach introduced me to high-level players. They are not only brilliant but nice, and they've taken me under their wing. So yeah, there are people who aren't nice to me. I mean, I've been called everything at the poker table. I've been propositioned at the poker table, actually propositioned!

Was it an attempt to throw you off your game or to get you to his room?

Probably both. I called the "floor," which is management, and had him moved to another table.

If poker is an analog to real life, does it help or hurt to be a woman?

Obviously, the first thing people notice about me is my gender. And people stereotype. When you see someone looking a certain way, you assume they play a certain way. So once I figure out how they view women, I can figure out how to play against them. They're not seeing me as a poker player, they're seeing me as a female poker player. There are people who'd rather die than be bluffed by a woman. They'll never fold to me because that's an affront to their masculinity. I never bluff them. I know that no matter how strong my hand, they are still going to call me because they just can't fold to a girl. Other people think women are incapable of bluffing. They think if I'm betting really aggressively, it means I have an incredibly strong hand. I bluff those people all the time. There are people who think that women shouldn't be at a poker table, and they try to bully me. So, what do I do?
I let them. And I wait to be in a good position so that I can take their chips. Just like life, right?

Your last book, "The Confidence Game," was about con artists. Is there a thematic connection to the topics you write about?

The motive for this book was about getting back to what I'd studied in grad school: the illusion of control. How much of our lives do we actually control, and can we tell the difference? People often ascribe everything good to skill. And then when bad things happen, they say, "Oh, it's bad luck." Or they say, "You make your luck." That's just empirically impossible. And it drives me crazy because luck, by definition, is something you can't make. Luck is just … randomness. So that's what I wanted to write about. Poker was a way into it.

Now, it is true that I've long been preoccupied with the darker side of humanity. I'm interested in deviations because they make you notice the normal. In psychology, you learn a lot about the brain by looking at the deviant cases. When you ask if my books have a progression, I'd say the world of con artists has a lot of overlap with poker because of belief, deception, figuring out what people are representing.

Do you have any insights on why grifting schemes appear to be proliferating?

Fraud really thrives in moments of great social change and transition. We're in the midst of a technological revolution. That gives con artists huge opportunities. People lose their frame of reference for what can and can't be real.

Are there more con artists now?

It's more that technology has made conning easier. Before, if you wanted to con someone, you had to expend a lot of energy doing research. Was a person a good target? Today, we're all on Twitter and Facebook, putting out all this information about ourselves. With cellphones and emails, it's much easier to inundate a large number of people and to catch one person at a vulnerable moment. In the past, the grifter would have a lot of misses.
Now, they don't care if they have a thousand misses. All they need is one hit.

You've earned over $200,000 at the table so far. Few writers make that sort of money. Will you be quitting your day job?

For the next year, yes. But I'm never going to stop being a writer. Why can't I do both? I love poker. Why would I stop?

A version of this article appears in print on Aug. 14, 2018, on Page D5 of the New York edition with the headline: Rewiring Her Brain to Win at Poker.


Nano-optic endoscope sees deep into tissue at high resolution

July 30, 2018, Harvard John A. Paulson School of Engineering and Applied Sciences

Researchers adopt metalens technology in a new endoscopic optical imaging catheter to better detect disease, including cancer. Credit: Harvard University/Massachusetts General Hospital

The diagnosis of diseases in internal organs often relies on biopsy samples collected from affected regions. But collecting such samples is highly error-prone because current endoscopic imaging techniques cannot accurately visualize sites of disease. The conventional optical elements in catheters used to access hard-to-reach areas of the body, such as the gastrointestinal tract and pulmonary airways, are prone to aberrations that limit the full capabilities of optical imaging.

Now, experts in endoscopic imaging at Massachusetts General Hospital (MGH) and pioneers of flat metalens technology at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have teamed up to develop a new class of endoscopic imaging catheters, termed nano-optic endoscopes, that overcome the limitations of current systems. The research is described in Nature Photonics.

"Clinical adoption of many cutting-edge endoscopic microscopy modalities has been hampered by the difficulty of designing miniature catheters that achieve the same image quality as bulky desktop microscopes," said Melissa Suter, an assistant professor of medicine at MGH and Harvard Medical School (HMS) and co-senior author of the paper. "The use of nano-optic catheters that incorporate metalenses into their design will likely change the landscape of optical catheter design, resulting in a dramatic increase in the quality, resolution, and functionality of endoscopic microscopy. This will ultimately increase clinical utility by enabling more sophisticated assessment of cell and tissue microstructure in living patients."
"Metalenses based on flat optics are a game-changing new technology because the control of image distortions necessary for high-resolution imaging is straightforward compared to conventional optics, which require multiple complex-shaped lenses," said Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering at SEAS and co-senior author of the paper. "I am confident that this will lead to a new class of optical systems and instruments with a broad range of applications in many areas of science and technology."

[Image: Scanning electron micrograph of a portion of a fabricated metalens. Credit: Harvard SEAS]

"The versatility and design flexibility of the nano-optic endoscope significantly elevates endoscopic imaging capabilities and will likely impact diagnostic imaging of internal organs," said Hamid Pahlevaninezhad, an instructor in medicine at MGH and HMS and co-first author of the paper. "We demonstrated an example of such capabilities to achieve high-resolution imaging at greatly extended depth of focus."

To demonstrate the imaging quality of the nano-optic endoscope, the researchers imaged fruit flesh, swine and sheep airways, and human lung tissue. The team showed that the nano-optic endoscope can image deep into tissue with significantly higher resolution than current imaging catheter designs provide. The images captured by the nano-optic endoscope clearly show cellular structures in fruit flesh, and tissue layers and fine glands in the bronchial mucosa of swine and sheep. In the human lung tissue, the researchers were able to clearly identify structures corresponding to the fine, irregular glands that indicate the presence of adenocarcinoma, the most common type of lung cancer.

"Currently, we are at the mercy of materials that we have no control over to design high-resolution lenses for imaging," said Yao-Wei Huang, a postdoctoral fellow at SEAS and co-first author of the paper.
"The main advantage of the metalens is that we can design and tailor its specifications to overcome spherical aberrations and astigmatism and achieve very fine focus of the light. As a result, we achieve very high resolution with extended depth of field without the need for complex optical components."

Next, the researchers aim to explore other applications for the nano-optic endoscope, including a polarization-sensitive version that could distinguish tissues with highly organized structures, such as smooth muscle, collagen and blood vessels.

More information: "Nano-optic endoscope for high-resolution optical coherence tomography in vivo," Nature Photonics, DOI: 10.1038/s41566-018-0224-2, https://www.nature.com/articles/s41566-018-0224-2


Masayoshi Son in His Own Words. All 303,513 of Them

By Pavel Alpeyev
August 5, 2018, 11:00 PM GMT+2

Masayoshi Son has a lot going on these days. The SoftBank Group Corp. founder oversees the largest technology fund in history and is putting money into ride-hailing, e-commerce, digital payments, satellites, semiconductors, agriculture and cancer detection – just for a start. How to make sense of it all? Son uses earnings briefings to explain his evolving vision. So ahead of SoftBank's first-quarter results on Monday, Bloomberg News analyzed his comments from previous presentations to see how his focus has changed and what may lie ahead. Natural language processing software helped sift through briefings from the past 12 years, spanning more than 303,000 words. Son has shown impressive fortitude over that stretch: he hasn't missed a single one of the 48 reports.

A New Obsession

Son has sharply increased his references to artificial intelligence in recent months, proclaiming that it will redefine every industry and create many new ones. His comments about AI, robotics and the Internet of Things spiked after SoftBank's $32 billion acquisition of chipmaker ARM Holdings Plc in 2016. Now, Son is increasingly turning to these topics as a unifying theme for his wide-ranging deals. Pepper, SoftBank's humanoid robot, shared the stage with Son in 2015 but has faded in prominence since then.

Down on the Internet

The internet was the centerpiece of Son's vision of the future for years. That evolved into the mobile internet with the advent of the smartphone, when Son got exclusive rights to sell the iPhone in Japan. The topics come up less and less now, even though SoftBank's stakes in e-commerce giants Alibaba Group Holding Ltd. and Yahoo Japan Corp. are among its most valuable assets.

Getting Over Sprint

Son couldn't stop talking about Sprint Corp. in the years when he was struggling to turn around the money-losing U.S.
carrier he agreed to buy in 2012. The agreement in April to sell a controlling stake to T-Mobile US Inc. has freed him to focus on SoftBank's longer-term future.

The Information Revolutionary

The idea of an information revolution may conjure up visions of 1999 for most people. Not Son. He's using the term more than ever now that he's building a $100 billion investment fund and wielding unparalleled influence among the world's entrepreneurs. For Son, it's a synonym for disruption. He thinks no industry is safe from seismic technological change. That includes his own wireless business, which is part of his motivation in evolving toward something new.

300-Year Company

Son likes to talk about creating a company that can last 300 years. Far-fetched, yes. But he has shown a surprising ability to evolve from one business to another. When he was 19, he invented an electronic translator and then sold it to raise money for his own company. Since then, he's resold software, published magazines, organized trade shows, offered broadband services and managed wireless operators on two continents. Today, he says he wants to create a digital-age conglomerate with hundreds of portfolio companies. He calls this the "cluster of No. 1s" strategy: the idea of taking non-controlling stakes in industry-leading companies and encouraging them to cooperate. Will it work? He has a few more years to try.

With assistance by Adrian Leung and Jason Clenfield
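The kind of analysis Bloomberg describes, tracking how often topics come up across years of briefing transcripts, can be sketched in a few lines. This is a minimal illustration, not Bloomberg's actual pipeline; the topic keywords and the sample transcript are made-up stand-ins:

```python
# Sketch of keyword-frequency analysis over briefing transcripts.
# Topic keyword lists and the sample text are hypothetical examples.
import re
from collections import Counter

TOPICS = {
    "AI": ["artificial intelligence", "ai"],
    "internet": ["internet", "mobile internet"],
    "sprint": ["sprint"],
}

def topic_counts(transcript: str) -> Counter:
    """Count whole-word keyword matches per topic in one transcript."""
    text = transcript.lower()
    counts = Counter()
    for topic, keywords in TOPICS.items():
        for kw in keywords:
            counts[topic] += len(re.findall(r"\b" + re.escape(kw) + r"\b", text))
    return counts

briefing = "AI will redefine every industry. The internet era is over; AI is next."
counts = topic_counts(briefing)
print(counts["AI"], counts["internet"], counts["sprint"])  # 2 1 0
```

Run over one transcript per quarter, the per-topic counts become time series, which is how shifts like the post-2016 spike in AI mentions or the fading of Sprint would show up.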


Bottled Waters of the World

An exciting new source for bottled water is melted icebergs. Iceberg water is the most technically challenging and physically hazardous bottled water to produce. Specially equipped boats are required to lift the ice out of the sea and return it to shore for rinsing, melting, and bottling. As icebergs can flip and threaten to sink a boat at any time, extreme caution must be taken. The weight and logistical difficulties mean only limited quantities can be gathered.

Icebergs are formed through "calving," which is when chunks of ice break off a glacier into the sea. The icebergs then drift with ocean currents and eventually melt. There are only a few places in the world where icebergs are harvested: the remote Arctic coasts of the Svalbard Archipelago, western Greenland, and eastern Canada. Harvesting icebergs for enjoyment as fine water utilizes a resource which otherwise would have been wasted and contributed to global sea level rise. Scientists have also found that the increase in iceberg calving due to global warming is damaging fragile polar seafloor marine environments. Drastic reductions in biodiversity have been observed in some regions where more icebergs are scraping the seafloor. While the scale of the iceberg water industry is small, every berg harvested does reduce the damage that can be done to the Arctic marine environment.

Great care needs to be taken in selecting the right icebergs. They must never have melted and refrozen, which would introduce impurities. Experienced gatherers are able to identify ice which fell as snow thousands or even tens of thousands of years ago. That snow was then buried inside ice layers, trapped by successive snowfalls, and moved to the center of a glacier, where it remained protected until calving into the sea. Icebergs which came from the bottom or side of the glacier must be avoided, as these will have picked up sediments and potentially been exposed to modern pollutants.
The ice in properly selected bergs has virtually no impurities, since it fell in the pre-industrial era. It has a soft, light and airy taste, with the lowest mineral content of almost any bottled water. Iceberg waters have a great story and are perfect for making ice cubes for special cocktails.


Swiss exports hit new quarterly record

Swiss exports are flying high. Photo: bayberry/Depositphotos

Swiss exports matched and then surpassed their recent records, with pharmaceuticals leading the charge. Swiss exports for the second quarter of 2018 totalled 55.7 billion francs (€47.8 billion), a new quarterly record, according to the Swiss Federal Customs Administration. Pharmaceuticals, machinery, precision instruments and watchmaking were among the sectors that registered new heights, thanks in part to record exports to China, the USA and Germany. Other sectors that registered positive growth were jewellery, cars, food, beverages and tobacco.

Swatch, the world's largest watchmaker, announced record sales in the USA for 2018. "Consumer demand, particularly from millennials, for authentic innovative brand products is greatly increasing on a worldwide scale, regardless of region or price segment," Swatch said in a statement.

But pharmaceuticals were the driving force behind the surge in exports, representing 42 per cent of the total growth. Swiss pharmaceutical giant Novartis saw its net profit in the second quarter rise to $7.8 billion (€6.7 billion), as net sales jumped nine per cent to $13.2 billion (€11.33 billion).

Asia was the key market behind the growth in exports, but the USA and Europe also proved to be fertile ground for Swiss products. Swiss exports to Germany hit a record high (+4.5 per cent), and some sectors also registered substantial growth in other EU markets. Swiss jewellery firms, for example, reported an increase of 598 million francs (€513 million) in exports to France in the second quarter of 2018.

While exports hit new record highs for the fifth quarter in a row, imports declined, due primarily to a 51 per cent reduction in imports of aeronautical vehicles, the customs administration said. The value of Swiss exports has grown by 4.8 billion Swiss francs (€4.1 billion) since the first quarter of 2017.
The second-quarter 2018 trade balance surplus was 4.58 billion Swiss francs (€3.9 billion), an increase of 1.4 per cent in seasonally adjusted terms.


The future of manufacturing

Making things in a changing world

John Hagel, John Seely Brown, Duleesha Kulasooriya, Craig A. Giffi, Mengmeng Chen
Deloitte Insights, March 31, 2015

The changing economics of production and distribution, along with shifts in consumer demand and the emergence of "smart" products, are pushing manufacturers to explore radically new ways of creating and capturing value.

Executive summary

Manufacturing is no longer simply about making physical products. Changes in consumer demand, the nature of products, the economics of production, and the economics of the supply chain have led to a fundamental shift in the way companies do business. Customers demand personalization and customization as the line between consumer and creator continues to blur. Added sensors and connectivity turn "dumb" products into "smart" ones, while products increasingly become platforms, and even move into the realm of services.

As technology continues to advance exponentially, barriers to entry, commercialization, and learning are eroding. New market entrants with access to new tools can operate at much smaller scale, enabling them to create offerings once the sole province of major incumbents. While large-scale production will always dominate some segments of the value chain, innovative manufacturing models, such as distributed small-scale local manufacturing, loosely coupled manufacturing ecosystems, and agile manufacturing, are arising to take advantage of these new opportunities.

Meanwhile, the boundary separating product makers from product sellers is increasingly permeable. Manufacturers are feeling the pressure, and gaining the ability, to increase both speed to market and customer engagement. And numerous factors are leading manufacturers to build to order rather than building to stock. In this environment, intermediaries that create value by holding inventory are becoming less and less necessary.
Together, these shifts have made it more difficult to create value in traditional ways. At the same time, as products become less objects of value in their own right and more the means for accessing information and experiences, creating and capturing value has moved from delivering physical objects to enabling that access. These trends can affect different manufacturing sectors at different rates. To determine the speed and intensity of the coming shifts in a particular sector, companies should consider factors including the extent of regulation, product size and complexity, and the sector’s level of digitization. As these trends play out in a growing number of manufacturing sectors, large incumbents should focus more tightly on roles likely to lead to concentration and consolidation, while avoiding those prone to fragmentation. The good news is that three roles driven by significant economies of scale and scope—infrastructure providers, aggregation platforms, and agent businesses—offer incumbents a solid foundation for growth and profitability. Due to competitive pressures, large manufacturers may experience increasing pressure to focus on just one role, shedding aspects of the business that might distract from the company becoming world class in its chosen role. The likely result is a significant restructuring of existing product manufacturers. The growth potential of adopting a scale-and-scope role can be further enhanced by pursuing leveraged growth strategies. Rather than focusing solely on “make vs. buy" options, large players will have an opportunity to connect with, and mobilize, a growing array of new entrants, many of which will target fragmenting portions of the manufacturing value chain in order to deliver more value to their customers. Two emerging business models, “product to platform" and “ownership to access," seem particularly promising in terms of driving leveraged growth strategies. 
Finally, given the emergence of more complex ecosystems of fragmented and concentrated players across a growing array of manufacturing value chains, businesses that understand emerging "influence points" will have a significant strategic advantage. As the manufacturing landscape evolves and competitive pressure mounts, driven by the needs of ever more demanding customers, position will matter more than ever. In all the decisions about where and how to play in this new environment, there is no master playbook—and no single path to success. But by understanding these shifts, roles, and influence points, both incumbents and new entrants can give themselves the tools to successfully navigate the new landscape of manufacturing.

Introduction

On the cavernous show floor of the 2015 International Consumer Electronics Show in Las Vegas, you come across yet another new company and product. FirstBuild is presenting the Chillhub, an open-source USB-connected refrigerator. You may wonder about the uses of such a product. Not to worry: Members of the FirstBuild community have already come up with more than 50 possibilities—including an LED disinfecting light, a hyperchiller, and an egg carton that doubles as an egg cooker. Several of these ideas are now being prototyped to test their market viability.1 FirstBuild is a new entity, but it's not another Silicon Valley startup. Instead, it's a microfactory set up in Louisville, Kentucky, by General Electric's appliance division. Its mission: to design, build, and market-test new innovations. For FirstBuild, GE has partnered with Local Motors, a small company that crowdsources and manufactures automobiles, to apply its platform to home appliances. The goal is to tap the extensive reach, creativity, and skills of online and off-line communities to ideate, prototype, build, and sell more products, far more quickly than would be possible within GE's established systems and structures.
In short, GE is taking a page from the startup playbook in a bid to stay relevant and competitive. FirstBuild is both an admission of the limitations of current scale-based R&D systems and a bold move to benefit from the structural speed and agility of low-capital-intensive leveraged models. In many ways, its creation reflects a growing recognition of the shifts underway in the manufacturing industry—shifts that are making manufacturing’s traditional business model, that of simply making things and selling them at a profit, increasingly obsolete. The first of these shifts is the end, for all intents and purposes, of a manufacturer’s ability to create and capture value solely by making “better" products. For decades, manufacturers have been pursuing “more for less," focusing on delivering increasing product quality and functionality to consumers at lower and lower prices. But while this model served manufacturers well when improvements were relatively few and far between, accelerating technological change—and the consequent shortening of the product life cycle—has reduced the window of opportunity for capturing value from any given improvement to a sliver of what it once was. And in an era of global competition, most of the already small gains in margin from product improvement are often competed away, with the consumer as the beneficiary. With delivering more for less no longer a sustainable strategy, forward-thinking manufacturers are looking for alternative ways to create and capture value. They are being forced to rethink old notions of where value comes from, who creates it, and who profits from it, broadening their idea of value as a point-of-sale phenomenon to include a wide array of activities and business models. It is no longer just about selling the product, but about gaining a share of the value it generates in its use. 
Consider the value that Netflix generates through the use of televisions as a conduit for streaming entertainment—or the value that businesses such as Zipcar and Uber create through the use of cars for on-demand mobility. Manufacturers are waking up to possibilities such as these and, in the process, starting to transform the way they do business. Against this backdrop, a second, parallel shift is taking place. It arises from a confluence of factors moving scale upstream and fragmentation downstream in the manufacturing supply chain. Advances in technology and changes in marketplace expectations are making it possible for relatively small manufacturers to gain traction and thrive in an industry where scale was once a virtual imperative. Thanks to technologies that are reducing once-prohibitive barriers to entry, and encouraged by fragmenting consumer demand, modestly sized new entrants now pose a legitimate threat to large, established incumbents. Indeed, in the race to find new ways to create and capture value, their smaller size and agility may give many market entrants an advantage over larger, older organizations, if only because incumbents may find it difficult to change entrenched business models and practices to accommodate new marketplace realities. Moreover, the new entrants are not necessarily even manufacturing companies in the traditional sense. The growing popularity of “smart" products, for instance, has prompted some technology companies to make forays into the manufacturing space, either by developing software to run the products, or by producing the products themselves. Incumbents may, of course, choose to meet new entrants on their own ground, finding ways to create and capture value that rely more on capitalizing on a product’s value-creating attributes than on selling the product itself. But there’s another option. 
Some incumbents, viewing the proliferation of fragmented smaller players as a market in itself, may opt to support niche manufacturers by providing them with products and services for which scale still provides an advantage—platforms for knowledge sharing, components upon which niche manufacturers can build, and the like. Due to competitive pressures, large incumbents will likely consolidate further, providing the foundation for a large number of fragmented smaller players dedicated to addressing the increasingly diverse needs of the consumer. The result is an ecosystem that includes both niche players and large scale-and-scope operators. Facing these two macro shifts, manufacturers—both incumbents and new entrants, from both traditional and nontraditional backgrounds—must understand the forces driving the industry's evolution in order to choose their path forward. How can large incumbents take advantage of emerging tools, techniques, and platforms? What lessons can new entrants and incumbents alike learn from organizations from other industries that have staked a claim in the manufacturing space? And how can organizations find profitable and sustainable roles in the future manufacturing landscape? With these questions in mind, we take a deeper dive into four areas whose changing dynamics underlie both of the shifts we have described, exploring the trends and factors that influence each one:

- Consumer demand: Consumers' rising power and unmet needs around personalization, customization, and co-creation are causing niche markets to proliferate.
- Products: Technological advances enabling modularity and connectivity are transforming products from inert objects into "smart" devices, while advancements in materials science are enabling the creation of far more intricate, capable, and advanced objects, smart or otherwise. At the same time, the nature of the product is changing, with many products transcending their roles as material possessions that people own to become services to which they buy access.
- Economics of production: Technologies such as additive manufacturing are making it possible to cost-effectively manufacture products more quickly, in smaller and smaller batches.
- Economics of the value chain: Digital technologies are narrowing the distance between manufacturer and consumer, allowing manufacturers to bypass traditional intermediaries.

Each of these shifts—in customer demand, the nature of products, the economics of production, and the economics of the value chain—contributes to an increasingly complex economic environment that makes value creation more challenging while making value capture even more crucial (see figure 1). After exploring the evolving landscape, this report lays out steps both entrants and incumbents can begin to take to effectively navigate this landscape of the future. When navigating the path to enhanced value creation and value capture, large incumbents, especially, should determine the urgency of change in a given market, focus on the most promising business types, pursue leveraged growth opportunities, and identify (and, where possible, occupy) emerging influence points. The path to success is specific to each business, and businesses should envision their organizations in new ways if they want to make the most of the available opportunities.

The changing nature of consumer demand

Spend a few minutes browsing through Pinterest (the popular "scrapbooking" site for collecting and sharing visual ideas and images) or Etsy (the massive online sales platform for individual craftspeople) and you'll get a visceral sense of shifting retail demand. More and more, buyers are seeking—and finding—products that are personalized and customized to fit their individual needs.
In this landscape, Pinterest reveals desire, and Etsy embodies the ability to fulfill it. Chris Anderson described this phenomenon in his book The Long Tail: an increased shift away from mainstream products and markets at the head of the demand curve, replaced by a gravitation toward multiple, ever-expanding niches that constitute the curve's "long tail."2 The ubiquity of platform and application (app) models, represented most famously by the iTunes and Android platforms, exemplifies both the increase of niche demand and the ability to service it to capture value.3 At the same time, consumers are embracing personalization, customization, and cocreation, generating an abundance of niche markets.

Personalization and customization

At its simplest, personalization—adding to or changing a product to fit the individual—can be as basic as monogramming a towel; customization involves creating products attractive to specific niche markets. But the current rise in both personalization and customization is more than cosmetic. It's the difference between adding your name to a mass-produced object and generating a product made for your unique body, between buying a pair of drugstore reading glasses and receiving chemotherapy optimized for your particular tumor. Personalization (to the individual) and customization (to a niche) have always taken place. Historically, however, they've been the province of the wealthy, with offerings such as custom tailoring and high-performance automobiles. No longer. Digital technologies, especially the Internet, have made personalization and customization available to a wide range of consumers, making it more cost-effective to satisfy demand. As a result, tailored products for niche markets are becoming increasingly available and accessible, raising consumers' expectations of being able to get exactly what they want as opposed to settling for mass-produced items.
This, in turn, is fragmenting the consumer marketplace into numerous niche markets, each of which represents an opportunity for manufacturers capable of delivering the desired goods and creating and capturing value through economies of scope rather than economies of scale. One such niche market is the tiny home movement, in which residents seek to live well in smaller spaces as a way of reducing costs or increasing geographic mobility. These consumers seek out products tailored to their limited spaces, favoring the deliberately compact, multifunctional, and aesthetically bold. Websites such as apartmenttherapy.com4 and tinyhouseblog.com5 tout ideas and profile living spaces appealing to the community. A growing number of craftspeople and small manufacturers reach these buyers through sites like Etsy; mass-market furniture sellers such as IKEA also focus on serving them. Another niche market being transformed by customization and personalization is the disability community—which encompasses not only those with physical disabilities, including blindness and mobility issues, but also those with perceptual and learning differences such as dyslexia.6 A growing number of startups are developing technologies and manufacturing new products that can be customized or personalized for this audience at a radically lower cost than even two or three years ago. Lechal is a Hyderabad-based hardware startup whose haptic devices offer tactile feedback for the visually impaired; one product incorporates electronics into shoe soles, aiding navigation with directional vibrations.7 Many such companies are using technologies designed for the mainstream to serve their niche. For example, the recent explosion of consumer-grade additive manufacturing technologies and printers has led Enable to build a platform matching owners of 3D printers to children requiring artificial limbs. The company has also developed open-sourced designs for printable custom-fit artificial limbs. 
At the commercial level, related technology reaches a wider audience with products such as Invisalign's custom dental braces and Normal Earphones' custom 3D-printed earphones.

Consumers as creators

Beyond their rising interest in personalization and customization, consumers are also increasingly apt to engage in the creation, or at least the conceptualization, of the products they buy. At base, this phenomenon represents a shift in identity from passive recipient to active participant—a blurring of the line between producer and consumer. One manifestation of this trend is the growing popularity of the maker movement—a resurgence of DIY craft and hands-on production among everyone from Lego-obsessed kids to enthusiastic knitters, electronics geeks to emerging product designers. Those involved in "making" see themselves in a different light in relation to the products they use. Some actually take on the mantle of maker, taking pride in creating rather than consuming. Others, while not producing objects themselves, become collaborators, engaging with maker culture to support and shape the products they buy, and deriving identity from that engagement. As more and more makers have begun selling their creations and customizations, a thriving ecosystem of platforms and niche providers has arisen, including learning tools, digital repositories, service bureaus, tool shops, kit manufacturers, crowd platforms, and online and off-line retail outlets. Most of these niche providers are small startups and microbusinesses, though several have grown to a point where they're challenging incumbents—and redefining how demand is both expressed and satisfied. The maker movement is aptly named. Its biggest and best-known event, MakerFaire, was launched by Maker Media in 2005.
By 2014, there were more than 100 MakerFaires around the world, with flagship events in the San Francisco Bay Area and New York attracting more than 200,000 visitors.8 The so-called “gym for makers," TechShop, recently opened its eighth location in Arlington, VA. Across the United States, more than 200 such “hacker spaces" give users access to the tools and training they need to create in wood, metal, plastic, fabric, and electronics while communing with likeminded creators.9 Even those outside maker culture are becoming more likely to seek involvement in shaping what they purchase. This involvement can take the form of voting for favorite designs on an ideation platform, crowdfunding a hardware startup, or engaging an Etsy seller to create a custom item. More-involved individuals might customize or hack a build-it-yourself product kit, design and build pieces from scratch, or sell their creations to others within or outside the movement. This incipient change in identity from consumer to creator is also driving a change in how brands are perceived. Many consumers want to get past the marketing to create a more authentic relationship with the products they consume. This impulse feeds into the growing “buy local" movement as well as into the growth of retailers such as Etsy (which brought in more than $1.35 billion in 2013), connecting buyers to craftspeople and their stories.10 At all levels of engagement, participants endeavor to put a personal stamp on the products they consume—and put pressure on manufacturers large and small to deliver products that enable a higher level of engagement and authenticity. As consumer demands shift toward personalization, customization, and creation, we will see an increasing proliferation of niche markets where, rather than “settling" for mass-market products, consumers will be able to find or even create products suited to their individual needs. 
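The niche proliferation just described is the "long tail" in quantitative form: under a heavy-tailed popularity curve, the many small niches can jointly rival the hits. A toy sketch in Python; the 1/rank (Zipf-like) demand curve is an illustrative assumption made here, not a claim from the report:

```python
# Toy long-tail illustration: demand for the product at popularity rank r
# is assumed to fall off as 1/r (a Zipf-like curve, chosen purely for
# illustration).
N_PRODUCTS = 100_000
demand = [1.0 / rank for rank in range(1, N_PRODUCTS + 1)]
total = sum(demand)

head = sum(demand[:100]) / total    # share held by the top 100 "hits"
tail = sum(demand[100:]) / total    # share held by the remaining 99,900 niches

# Under this curve, the aggregated niches out-earn the hits.
print(f"head: {head:.0%}, tail: {tail:.0%}")
```

The exact split depends on the assumed curve; the point is only that when demand fragments this way, serving the tail in aggregate becomes as valuable as serving the head.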
In this environment, manufacturers fully leveraged to produce large volumes of limited numbers of products will likely be at a disadvantage, forcing them to rethink their place in the manufacturing landscape and the value they bring the consumer. The good news is that amid the fragmentation, new roles and new sources of value can emerge for large players.

The changing nature of products

In parallel with, and in response to, shifts in consumer demand, the nature of products is changing. "Dumb" products are getting "smarter"—more connected, intelligent, and responsive. At the same time, how consumers view and use products is changing, redefining both the factors that determine product value and how companies can capture it. As clothing becomes "wearables," cars "connected cars," and lighting "smart lighting," will the majority of the benefits accrue to the product manufacturer, the software platform owner, the creator of the "killer app" that makes the product come alive, or the company that generates insights from the resulting big data? The questions raised go far beyond the technical challenges of manufacturing. As products create and transmit more data, how much value will be located in the objects themselves, and how much in the data they generate, or the insights gleaned from it? And what of the option to rethink products as physical platforms, each the center of an ecosystem in which third-party partners build modular add-ons? Each of these questions envisions a change in the nature of products—and a much larger shift in how value is created and captured.

From dumb to smart

This year's Consumer Electronics Show (CES) in Las Vegas featured nearly 100 smart watches and health and fitness trackers.11 At the simplest level, these devices logged activity; more complex versions tracked breathing patterns and measured body composition.12 In deference to consumers' demands for good design, nearly all paid at least some attention to aesthetics.
Quite a few led with their looks: Smart-device startup Misfit partnered with Swarovski to produce the Swarovski Shine Collection, nine crystal-studded jewelry pieces, each concealing an activity tracker.13 Such items are good examples of the quantified self movement, in which participants use technology to track and analyze the data of their daily lives. As yet, most are still stand-alone tools. The next generation of these devices, however, is likely to be integrated into our clothing and accessories so seamlessly that they become “wearables." The emergence of technologically enabled products such as activity trackers is only one facet of a looming transition in physical goods. In the near future, many, if not most, “dumb" products will become “smart"—falling under the umbrella of the Internet of Things (IoT). The pervasive expansion of sensors, connectivity, and electronics will extend the digital infrastructure to encompass previously analog tasks, processes, and machine operations. Gartner analysts predict that by 2020, the IoT will include nearly 26 billion devices, adding $1.9 trillion in global economic value.14 In a recent survey, nearly 75 percent of executives indicated that their companies were exploring or adopting some form of IoT solution, with most seeing integrating IoT into the main business as necessary to remain competitive.15 The evolution of “smart" products presents manufacturers with challenges on multiple levels. Some of these products incorporate complex software or interact with users’ smart devices, while others use cutting-edge materials—such as electroactive polymers and thermal bimetals—that continually adapt to users’ changing needs. Further, not all products will be smart in the same way and, as smart products become more complex, it will be increasingly difficult for any single manufacturer to develop an entire hardware/software stack in house. 
To capture value in a world where products are as much about software as about physical objects, manufacturers should consider their business models in the light of four factors that play into generating value from smart products: integrated software, software platforms, the applications (apps) that run on those platforms, and data aggregation and analysis. While integrated software handles all the performance functions needed by the hardware housing it, software platforms act as translators, managing the hardware based on new instructions delivered through easily updatable apps. This platform-plus-app model allows for a greater range of customization and personalization, and makes it easier to update products in response to shifting needs and contexts.

From product to platform

The drive for customization and personalization—coupled with the success of such platform-centric business models in software—is pushing some manufacturers to rethink products as physical platforms, with each platform the center of an ecosystem in which third-party partners build modular add-ons. This change goes beyond simply adding software to physical objects, though that is an important component of platform creation. The design of physical products is changing to allow for extensive personalization and customization, and to encourage offerings from third-party partners that increase the value of the base product. We most often think of "platforms" in terms of software, with the most recent example being the massive success of the iOS and Android app platforms. These platforms use a leveraged growth model that relies on simple mathematics: The greater the reach and value of the extensions created, the greater the number of base-module sales. However, platforms can also exist outside the digital world. A platform is any environment with set standards and governance models that facilitate third-party participation and interactions.
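The platform-plus-app split described earlier in this section maps onto a familiar plug-in pattern in software. A minimal sketch in Python; every name below is hypothetical and purely illustrative, not any vendor's actual API:

```python
# Hypothetical sketch of the platform-plus-app model: the platform layer
# owns hardware access and exposes a stable interface, while "apps" are
# small, independently updatable behaviors installed on top of it.

class SmartDevicePlatform:
    """Platform layer: translates app instructions into hardware actions."""

    def __init__(self):
        self._apps = {}

    def install(self, name, app):
        # Apps are plain callables: given a sensor reading, they return
        # an instruction for the platform to carry out.
        self._apps[name] = app

    def tick(self, sensor_reading):
        # Fan the reading out to every installed app and collect their
        # instructions; hardware details stay hidden behind this layer.
        return {name: app(sensor_reading) for name, app in self._apps.items()}

platform = SmartDevicePlatform()
platform.install("thermostat", lambda temp: "heat_on" if temp < 18 else "idle")
platform.install("logger", lambda temp: f"logged {temp}C")

print(platform.tick(15))  # thermostat requests heat; logger records 15C
```

The design point the report makes falls out directly: swapping or updating an app never touches the platform layer, which is what makes products built this way easy to customize and to update in the field.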
Successful platforms increase the speed and lower the cost of innovation, as they reduce entry costs and risks through common interfaces and plug-in architectures. Participants can join in and collaborate, extending the platform’s functionality. The more participants a platform has, the richer its feedback loops and the greater the system’s learning and performance improvements. Aftermarket add-ons—one example of a physical platform—have a long history. Thriving aftermarkets exist to customize and personalize automobiles for both utility and aesthetics, for example. Most aftermarket products are manufactured and installed by third parties that have no affiliation with the original equipment manufacturers.16 What is new is the upsurge of products designed from the start as bases for third-party extensions from partners and others. The aftermarket has become a premarket. The view of products as platforms—as starting points for customization and personalization—has been embraced by the maker movement. In the world of furniture, for example, IKEA product lines have been further extended by consumers who “hack" off-the-shelf furniture, posting photos and instructions on Ikeahackers.net.17 Similarly, at Mykea (thisismykea.com), artists can submit designs to “reskin" standard Ikea furniture.18 In other product-as-platform plays, chip manufacturers Intel and AMD have had to compete with cheaper, smaller electronics platforms such as Arduino and Raspberry Pi. These platforms’ successes are directly and intentionally tied to that of the extensions that consumers build on them. Forward-thinking product manufacturers are approaching such movements, not as fringe activities, or even as threats to the brand, but as marketing opportunities—a chance to embrace a passionate, highly invested community, offering opportunities for engagement and loyalty in products designed and manufactured for hackability. 
They are extending the concept of the product as platform into an explicit business strategy: Introduce a product platform, then invite multiple third parties to create modular add-ons that extend the value to the customer. MIT's annual Vehicle Design Summit 10^5 competition, launched in 2013, invites 10 teams to develop automobiles standardized around five subsystems—auxiliary power unit (APU)/fuel, body, dashboard, suspension, and chassis—creating 100,000 permutations.19 And in the for-profit world, Google's Project Ara will soon launch a modular smartphone, inviting third-party manufacturers to build niche-targeted swappable modules that fit into nine compartments in the Ara shell. A user might extend battery life with an extra battery one day, then switch out the camera for a night-vision module the next. Planned modules include chargers and connectors, screens, cameras, speakers, storage, and medical devices such as blood glucose monitors and electrocardiographs.20 If we can endlessly customize our apps, why not the physical components of our phones?

Intel Edison: Making a platform play for IoT

It's common knowledge that Intel missed the mark on mobile. For decades, the company led sales of PC processors; then, with the rise of mobile phones, ARM Holdings took the lead position in chip design and licensing by specializing in low-cost, low-power processor technology, while Qualcomm and Samsung dominated manufacturing. As long-time Intel executive Andy Bryant put it, "We're paying a price for that right now."21 Despite Intel's many attempts to catch up, including paying subsidies to push its presence in tablets, its mobile business continues to struggle. Most recently, the division posted a billion-dollar operating loss in Q3 2014.22 Determined to catch the next wave, the company has invested significantly in making chips for the Internet of Things.
(IDC forecasts the existence of more than 30 billion smart devices by 2020, comprising a $3 trillion market.23) The result is impressive: In Q3 2014, Intel's IoT chips brought in $530 million in revenue, up 14 percent year over year.24 Then, at the 2014 Consumer Electronics Show, the company announced Edison—a low-cost, product-ready development platform designed for use in wearables, robotics, and IoT. The chip quickly gained popularity among makers for its versatility and high performance. In 2015, Intel followed up with the release of Curie, a button-sized module designed for easy integration into wearable technologies. Unlike with the PC wave, when Intel locked in a few big partners, this time the chipmaker is allying with a wide range of smaller players. To inspire individuals and small teams to get started with Edison, and to make connections in the maker community, it has established an ecosystem designed to lower barriers to entry, putting out resources from hacker kits to user guidebooks and establishing a strong presence at events. Through the Make It Wearable Challenge, Intel is helping startups transition from idea to product; in the most recent competition, teams from all over the world came up with ideas and prototypes incorporating the Edison chip, including flyable and wearable cameras, low-cost robotic hands, and sensors for use in skiing. For Intel, the move into the IoT market is smart business. For makers and manufacturing entrants, it's the base for an outpouring of innovative products.

From product to service

Where does the product end and the service begin? In one sense, this is an old question; business strategists have long advised companies to focus on the problem solved rather than on the product that solves it.
Today, however, the expanding digital infrastructure—low-cost computing and digital storage, ubiquitous connectivity, and a multiplying number of connected devices—has created many more opportunities to fundamentally rethink the product as a service. This trend is most evident where the "product" is virtual, with Adobe, Autodesk, and Microsoft offering software suites via monthly subscription. At the same time, in the enterprise software market, onsite IT hardware and software are being eclipsed by cloud-based software-as-a-service (SaaS) offerings. Opportunities to reconceptualize physical products as services are growing as well. For instance, digital infrastructure has spurred the "sharing economy"—a broad term used to describe businesses that commoditize sharing of underutilized goods and services. By moving the focus from ownership to access (collaborative consumption), this model shifts the economics of usage from product to service, giving rise to billion-dollar companies including Uber (crowdsourced transportation) and Airbnb (crowdsourced housing). Lesser-known startups have arisen to share tools, kitchen appliances, and other rarely used or underutilized products. The value created by sharing these goods is not, for the most part, being captured by product manufacturers. There is a largely untapped opportunity for manufacturers to reconfigure their own business models, reenvisioning the nature of their products in a way that helps them take advantage of the product-as-a-service concept. General Electric is a notable example of a company that has successfully navigated the shift from ownership to access. GE Aviation has recently taken steps to pursue a product-as-a-service business strategy for one of its major offerings. Along with Rolls-Royce and Pratt & Whitney, the GE division manufactures aircraft engines for a market of buyers led by Boeing and Airbus.
These engines, which cost $20–30 million each, have long, complex sales cycles and relatively low margins.25 Not surprisingly, more money is made servicing this equipment over its 30-year lifespan than on the initial sale. With this in mind, GE has introduced a “Power by the Hour” program that shifts from sales and service to a utility model. The idea—and the term, coined by Bristol Siddeley in the 1960s—has since been used by other engine manufacturers, including Rolls-Royce and Pratt & Whitney. In GE’s offering, after an initial setup cost, the customer pays for time used rather than equipment or service—moving from a large fixed cost to a variable cost aligned with usage. In such a scenario, the advantages to both company and customer are many. Sensors on the new engines generate real-time usage, diagnostic, and failure data. Together with a specialist team that will fly around the globe to address issues, this setup has reduced unscheduled downtime significantly.26 More accurate data also helps the company improve both products and scheduling, reducing overall costs for both parties. Of course, this model isn’t unique to the jet engine market. In the consumer market, for instance, instead of selling manufactured solar panels, providers such as SolarCity offer customers fixed utility pricing while financing the initial cost of products and installation. The story with such providers is one of both large and small competitors coming into multiple markets with a service-driven model, capturing value that manufacturers once claimed as their own. The manufacturers that respond with a new lens on products and services are those that will continue to thrive. As products become “smart,” connected, co-created, and even transformed into services, the whole notion of creating value solely by making and selling more items becomes obsolete. With the change in the nature of products comes a shift in value creation.
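The fixed-to-variable shift in a “Power by the Hour”-style model can be sketched as a toy cost comparison. Every number below is a hypothetical assumption for illustration, not GE’s actual pricing:

```python
# Toy comparison of outright ownership vs. a pay-per-use ("power by the
# hour") model over an engine's service life. All figures are assumed.

def ownership_cost(years, purchase_price, annual_service):
    """Large fixed cost up front, plus separately billed service."""
    return purchase_price + annual_service * years

def usage_cost(years, setup_fee, hours_per_year, rate_per_hour):
    """Small setup fee, then a variable cost aligned with hours flown."""
    return setup_fee + rate_per_hour * hours_per_year * years

# Assumed figures: a $25M engine with $1.2M/yr service, vs. a $2M setup
# fee and $700 per flight-hour at 3,000 hours flown per year.
own = ownership_cost(30, 25e6, 1.2e6)
use = usage_cost(30, 2e6, 3_000, 700)
print(f"ownership: ${own / 1e6:.0f}M  pay-per-hour: ${use / 1e6:.0f}M")
```

The point of the model is not which total is lower over 30 years but that the operator’s cost now scales with utilization: fly fewer hours, pay less, with no large fixed outlay at risk.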
In the coming landscape, value will come from connectivity, data, collaboration, feedback loops, and learning—all of which can lay the groundwork for new and more powerful business models.

The changing economics of production

Manufacturing, until recently, was a daunting space with relatively few players. Barriers to entry were high and initial capital investments hefty; products had to navigate multiple intermediaries before reaching the consumer. Today, however, huge shifts in technology and public policy have eroded barriers that once impeded the flow of information, resources, and products. In a world where computing costs are plummeting, connectivity is becoming ubiquitous, and information flows freely, previously cost-prohibitive tasks and business models are becoming more available to more players. Barriers to entry, commercialization, and learning are eroding, as is the value proposition for traditional intermediaries in the supply chain. Meanwhile, rapid advances and convergences in technology, including additive manufacturing, robotics, and materials science, further expand what can be manufactured and how. All of these developments are combining with changing demand patterns to increase market fragmentation, supporting a proliferation of product makers further down the value chain with more direct consumer contact. Upstream, larger manufacturers will likely consolidate, taking advantage of scale to provide components and platforms used by smaller players.

Exponential technologies

One of the most well-known ideas about digital technology is Moore’s law, which describes the doubling of computer processing speed every 18 to 24 months for the past 50 years.27 Modern computers continue to become exponentially smaller, faster, and cheaper. And as more and more technologies become digitally empowered, this pattern of growth has expanded beyond microprocessors.
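The doubling cadence cited above compounds dramatically over 50 years; a quick back-of-envelope check (only the 18-to-24-month and 50-year figures come from the text):

```python
# How many doublings does Moore's law imply over 50 years, and what
# overall improvement factor does that produce?

def doublings(years, months_per_doubling):
    """Number of doublings that fit in a span of years."""
    return years * 12 / months_per_doubling

for months in (18, 24):
    d = doublings(50, months)
    print(f"{months}-month doubling over 50 years: "
          f"{d:.0f} doublings, ~{2 ** d:.1e}x improvement")
```

Even the slower 24-month cadence yields roughly 25 doublings, a more-than-ten-million-fold improvement, which is why the same pattern spreading to other technologies matters so much.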
Emerging fields with potential for exponential growth include additive manufacturing, robotics, and materials science. The convergence of these and other technologies has the potential to generate huge improvements in capability, utility, and accessibility.

Additive manufacturing

Additive manufacturing (AM), better known as 3D printing, encompasses manufacturing technologies that create objects by addition rather than subtraction (through milling, for example). While 3D printing technologies were developed more than 30 years ago, this decade has seen a rapid advancement in tools, techniques, and applications in both commercial and consumer arenas. Today, while additive manufacturing is used mostly in prototyping,28 it is expanding to other stages in the manufacturing process. Tooling—the production of molds, patterns, jigs, and fixtures—is traditionally one of the most time-consuming and costly portions of the process, far outweighing unit costs for each additional part, and leading manufacturers to spread out the up-front cost across large production runs. In contrast, the initial capital outlay for AM is typically much lower, not only because AM obviates the need for tooling, but also because the cost of AM equipment has been decreasing rapidly. The price of additive manufacturing is dropping, making AM increasingly competitive with conventional manufacturing due to differences in fixed vs. variable costs. Even though the variable cost for AM is currently higher than that for conventional manufacturing, reduced up-front investment often makes the total cost of AM less for small production runs (see figure 2). All of this can make AM a game-changing option for small-batch production.
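The fixed-vs-variable tradeoff just described can be made concrete with a small sketch. The cost figures are hypothetical placeholders, not data from the report:

```python
# Conventional manufacturing: large up-front tooling cost, cheap units.
# AM: no tooling, but a higher cost per unit. Below some run size,
# AM's total cost is lower. All figures here are assumptions.

def conventional_total(units, tooling=50_000.0, unit_cost=2.0):
    """Total cost when tooling must be paid for up front."""
    return tooling + unit_cost * units

def am_total(units, unit_cost=12.0):
    """Total cost with no tooling outlay."""
    return unit_cost * units

# Break-even run size: tooling / (AM unit cost - conventional unit cost)
break_even = 50_000.0 / (12.0 - 2.0)
print(f"With these assumptions, AM is cheaper below ~{break_even:.0f} units")
```

This is the shape of the curve figure 2 describes: the two total-cost lines cross at the break-even run size, and everything to the left of that crossing favors AM.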
In addition, complexity is free with additive manufacturing—in fact, the material cost of printing a complex design is less than that of printing a solid block, since it requires less time and material.29 When the burden of production is transferred from the physical world to the digital world, engineers can design intricate, previously unproducible shapes. And manufacturers can produce stronger, more lightweight parts that require less assembly time, reducing the overall cost of production or increasing the value of the final product.30 While AM technology is still developing in terms of speed, material, and precision, many industries are already using it to create high-value parts at low volume. In coming years, we can expect the range and scale of AM deployments to extend to lower-value, high-volume items.

Robotics

Industrial robots have historically been used mostly for tasks requiring exceptional strength and precision—for example, moving heavy items, welding, and semiconductor fabrication. They required heavy up-front investment and programming, and were usually bolted to the ground and caged as a safety measure for humans working in the vicinity. Use of industrial robots was therefore limited to large-scale manufacturing. Until recently, low labor costs plus the high price of industrial robots posed little incentive for low-wage countries to invest in automation, particularly for tasks that require relatively little training and lines of production that change frequently. Now, however, rising global labor costs and a new generation of cheaper, more capable, more flexible robots are changing the equation. The minimum wage in the Shenzhen area of southern China has risen by 64 percent in the past four years.
Some analysts estimate that, by 2019, per-hour labor costs in China will be 177 percent of those in Vietnam and 218 percent of those in India.31 Given such projections, it’s unsurprising that industrial robot sales in China grew by nearly 60 percent in 2013.32 In 2014, China became the largest buyer of industrial robots, buying more than 36,000—more than either the United States or Japan. While Japan still has the largest total number of active robots, China is well on pace to become the automation capital of the world.33 The rapidly falling cost of more capable robots is a complementary factor. Unlike industrial robots of the past, “Baxter,” the $22,000 general-purpose robot developed by Rodney Brooks at Rethink Robotics, can work safely alongside humans. It replaces programming with simple path guidance, allowing it to be retrained for another task simply by moving its arms to mirror the new path. Brooks’ creation signals yet another shift in workforce composition, freeing unskilled labor from repetitive tasks once too expensive to automate while further enabling the use and expansion of “cobots”—robots that work directly and collaboratively with human beings.34 OtherLab is developing “soft robots” that use pneumatic instead of mechanical power, reducing energy requirements and increasing safety while matching the dexterity and accuracy of existing mid-grade industrial robots. Though robots will not replace human labor in manufacturing in the immediate future, they are poised to take on an increasing share of the manufacturing floor. This is likely to reduce the number of low-wage, low-skill human manufacturing jobs while generating a relatively small number of specialized higher-wage jobs in programming and maintenance.

Materials science

Since the 1960s, the term “space-age” has been used to describe new materials that enable previously impossible engineering tasks.
The first generation of these materials—memory foam, carbon fiber, nanomaterials, optical coatings—has become ubiquitous. As new materials are created, older ones, once inaccessible to all but the most advanced, price-insensitive manufacturers, have begun to trickle down to the mainstream. Take carbon fiber, the poster child of space-age materials. While the energy costs associated with its manufacture still prevent use in many low-end applications, recent technological improvements have allowed manufacturers to produce higher volumes of carbon fiber products at lower prices. As a result, it has found utility in a slew of premium products such as bicycles, camera tripods, and even structural automotive components such as drive shafts and A-pillars.35 Lexus, for example, has developed a carbon fiber loom that, rather than forming two-dimensional sheets into three-dimensional shapes, can weave seamless three-dimensional objects.36 As manufacturing improvements lower costs and other barriers to access, we can expect to see such materials used in more mainstream applications. For example, Oak Ridge Labs has realized a 35 percent reduction in carbon fiber costs, and BMW plans to bring the cost of carbon fiber production down by 90 percent.37 In fact, lower costs and streamlined manufacturing processes are slated to double global carbon fiber production by 2020.38 The effects of such gains extend far beyond making it cheaper to manufacture high-tech items. Battery technology, for example, has seen dramatic performance improvements over the past decade as a result of materials science innovations. It has been predicted that advancements in chemistry and materials science will result in an 8 to 9 percent annual increase in the energy density of batteries.39 Other nascent technologies have the potential to vault past the capabilities of commonly used materials—even the first generations of space-age materials—by orders of magnitude. 
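The cited 8 to 9 percent annual improvement in battery energy density compounds quickly; a quick check of what those rates imply (the rates come from the text; the 10-year horizon is an arbitrary choice for illustration):

```python
# Compound improvement factor for a steady annual growth rate.

def growth_factor(annual_rate, years):
    """(1 + r) ** t, the standard compound-growth formula."""
    return (1 + annual_rate) ** years

low = growth_factor(0.08, 10)
high = growth_factor(0.09, 10)
print(f"Energy density after 10 years: {low:.2f}x to {high:.2f}x")
```

In other words, a seemingly modest single-digit annual rate more than doubles energy density within a decade, which is what makes these materials-science gains significant.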
Carbon nanotubes, for example, have one of the highest tensile strengths of any material while serving as one of the best conductors of both heat and electricity.40 They can carry four times more energy than copper while retaining the physical characteristics of a piece of thread.41 Researchers have envisioned applications including composite materials stronger than carbon fiber, advanced water filters, syringes that can inject genetic information into cells, solar panels, and artificial muscle fibers.42 Meanwhile, materials are being developed from new sources. MycoBond offers a flame-resistant Styrofoam alternative grown from Mycelium fungus.43 Hobbyists can now make thermoplastic at home using simple online instructions and the starch from a grocery store potato.44 And researchers are making surgical-grade plastic from silk.45 Like carbon nanotubes, these materials have potential in higher-performance settings. Nanocrystalline cellulose, a renewable material abundant in wood fiber, has potential applications ranging from plastic and concrete reinforcement to conductive paper, batteries, electronics displays, and computer memory.46 Other high-performance materials adapt to their environments. Dynamic materials such as electroactive polymers (polymers that change shape when exposed to an electric charge) and thermal bimetals (metals that change shape as temperatures change) have demonstrated potential for use in adaptable architecture. When used as the outer skin of a building, these materials can expand when it is hot to cool structures, and close when it is cold to preserve heat. Dynamic materials have also demonstrated value in more personal applications. 
The Phorm iPhone® case by Tactus uses electronically controlled fluids to create physical key guides on top of an existing iPad® or iPhone keyboard, giving the user a tactile keyboard or a flat, uninterrupted screen as the situation demands.47 As these materials develop, we can expect to see more physical objects reacting dynamically to suit our needs across contexts. While not everyone will have immediate access to newly developed materials, the barriers to entry for advanced, customized manufacturing will be reduced as advancements in materials science progress—opening up space for new players in cutting-edge manufacturing.

Converging technologies’ impact on manufacturing

No technological development exists in a vacuum. As more and more technologies reach a stage of aggressive growth, they are more likely to intersect, generating growth greater than the sum of their parts. When discussing the impact of converging exponential technologies on the manufacturing landscape, bear in mind that each technology will compound the capabilities of others, enabling previously unforeseeable innovations. For instance, materials science is fueling the expansion of additive manufacturing by increasing the range of printing materials. 3D printing has historically used plastics such as ABS and PLA, but newer machines can print in a wide range of materials, greatly increasing the technology’s reach. Modified PLA filaments impregnated with maple wood, bronze, iron, or ceramic are now available at the consumer level, allowing designers to create objects with characteristics of the chosen material.48 For more technical applications, MarkForged is developing a way to print PLA objects infused with carbon fiber, fiberglass, or Kevlar, making load-bearing 3D-printed objects, some with higher strength-to-weight ratios than aluminum, viable.
Christian von Koenigsegg of the Swedish supercar manufacturer Koenigsegg has discussed the utility of this technology in low-volume, high-performance applications such as supercar manufacturing. Other companies have made significant headway in the 3D printing of complex, highly engineered parts—for instance, GE’s titanium jet engine turbine blades. Chinese construction firms are printing five-story cement apartment buildings in Suzhou Industrial Park. Electronics manufacturers can use 3D printers to seamlessly embed electronics in printed housings or, by combining conductive and structural materials in the same device, print intricate electronic circuitry within an object during production.49 3D printers have also found use in medicine, printing custom hip replacements that facilitate bone growth—and even recreating human organs using a mix of alginate and human stem cells. Autodesk CEO Carl Bass has spoken extensively on convergence in design software and computing. In this area, Moore’s law has enabled price reductions to a point at which computing power’s incremental cost is functionally zero. This has allowed more people to use advanced modeling capabilities to produce detailed models of any physical object, without having to physically make it. This capability is supplemented by advancements in energy, materials science, nanotechnology, sensors, and robotics, which in turn allow for development and deployment of even more advanced technologies. The result is an interrelated technological economy in which progress in one industry directly affects progress in another. 
As more technologies approach an exponential turning point, we can expect to see even more such complex and dynamic relationships, further accelerating the progress of technology as a whole.

Eroding barriers to learning, entry, and commercialization

One of the strongest effects of the exponentially developing digital infrastructure is its ability to break down barriers, opening the manufacturing world to newcomers. As knowledge and information are digitized, it’s easier than ever to learn a new skill or connect with experts in any field, to enter a market that once required high investment capital, and to commercialize an opportunity from a product to a business. These benefits, first evident in the digital world, are now reaching physical manufacturing, where they are likely to spur both growth and change.

Lower barriers to learning

What does a Millennial (or at this point, anyone) do to learn something new? Google it. Or, in broader terms, search online. How-to videos on pretty much any topic can be found on YouTube. Websites such as Instructables, Hackster, and Makezine feature thousands of step-by-step projects in text and video. Discussion forums in communities of interest deepen learning with conversations—often mixing amateurs and experts—that address specific problems. Such online discourse is then extended to “real life” via tools, like Meetup, that make it easy to gather a group around a topic or “learning/hacking” session. Communities form around institutions such as TechShops and Fab Labs or events such as Maker Faire, MakerCon, SOLID, and the Open Hardware Summit, all of which include hands-on learning sessions. In short, the transfer of tacit knowledge—knowledge gained by doing—has become easier with the ready availability of both online and real-world events, each of which enhances the other.
The resulting influx of makers and startups drawn from these communities, and the ease of acquiring design and production skills, fuels the number of market entrants. While entrants are unequipped to challenge incumbents directly, they are both the sign and the result of rapid innovation; the areas where they innovate will be loci of change and growth in the nature of manufacturing. Note that barriers to learning have come down not just around design and production, but throughout the manufacturing-to-sales process. From desktop tooling to freelance engineering talent, crowdfunding to business incubators, a whole ecosystem has arisen to help budding manufacturers learn the ways of designing, manufacturing, and selling a product.

Lower barriers to entry

The digital infrastructure-based benefits that supported the rise of software startups at the turn of the century have now extended to hardware startups. In addition to pay-per-use models that allow for access to high-end computing power through offerings such as Amazon’s AWS service, an array of boutique agencies, freelance creative and technical consultants, and service marketplaces give prospective hardware entrepreneurs access to programming, design, and engineering talent on an as-needed basis. At the low end, sites such as Fiverr.com offer ad hoc services for as little as $5 an hour. And support for small providers of first services, and now products, is growing rapidly. Coworking spaces such as Hub and Citizenspace provide shared office space and ancillary support, reducing the initial investment and effort needed to launch a business. Both tooling technology and tool access have also been democratized. TechShop offers members access to complex design and tooling equipment for roughly the cost of a monthly gym membership.
A slew of desktop manufacturing modules, from 3D printers and CNC milling machines to printed circuit board (PCB) printers and pick-and-place machines, has hastened the speed of prototyping and small-scale manufacturing (see figure 3). As former Wired editor and 3D Robotics founder Chris Anderson put it, “Three guys with a laptop used to describe a Web startup. Now it can describe a hardware startup as well.”50

Lower barriers to commercialization

Barriers to initial funding and commercialization are also falling, making it easier than ever to enter a market, commercialize a creation, and build a business. Crowdfunding of hardware projects has become both popular and lucrative, reducing reliance on financing through bank loans and venture capital. Initial capital often covers tooling costs, requiring only enough revenue to cover production. Crowdfunding sites such as Kickstarter and Indiegogo have also allowed startups to identify early adopters, develop a loyal customer base, and establish demand prior to producing a single item. Venture funders have taken notice, increasing their funding of hardware startups, while a slew of hardware incubators and accelerators help startups move from idea to prototype to business. Traditional large-scale manufacturers are playing a role here as well. In early 2015, FirstBuild, a GE subsidiary, launched its first crowdfunding campaign on Indiegogo for the Paragon Induction Cooktop, a Bluetooth-enabled tabletop cooker—and the test case for the company’s new manufacturing model.
And in 2014, Foxconn, the world’s largest contract manufacturer, sectioned off a portion of one of its factories to house Innoconn, a startup incubator and microfactory targeting initial product runs of 1,000 to 10,000—a dramatic shift for a firm once accessible only to blue-chip brands with multimillion-unit orders.51 While Innoconn represents only a tiny fraction of Foxconn’s total production volume, it demonstrates the willingness of even the largest firms to learn small-batch manufacturing and support the growing small-company segment of the manufacturing landscape. By appropriating formerly small-scale funding and production practices like crowdfunding and small-batch manufacturing, big manufacturers can reap the benefits of both their size and the new methods’ agility.

PCH: From product concept to product delivery—a platform for hardware entrepreneurs and startups

Launched in 1996 as a one-man sourcing operation for computer parts, PCH is now a billion-dollar firm employing more than 2,800 people across the globe.52 The company spans the supply chain, designing custom manufacturing solutions for Fortune 500 companies as well as startups. From design manufacturing and engineering to packaging and fulfillment to logistics and distribution, PCH offers a variety of services to the hardware industry. In addition to manufacturing, fulfillment, and postponement facilities in Shenzhen, PCH works with a network of factories. In the past few years, PCH has added, through acquisition or organic growth, a hardware accelerator (Highway1), a division to help startups scale (PCH Access), an engineering and design division (PCH Lime Labs), an e-commerce platform (Fab.com), and distribution and fulfillment capacity (TNS).53 Reflecting its mission to help startups and incumbents make products and get to market, the company recently rebranded operations under the slogan “PCH: We make” and the tagline “If it can be imagined, it can be made.”
A recent “Demo Day" showcased the range of hardware startups PCH supports, including a company pitching smartphone-controlled haptic wearables, a connected water pump for home usage monitoring, a heads-up display for car navigation and connectivity, and smart jewelry. While a growing number of accelerators help entrepreneurs and startups navigate the value chain, PCH is emerging as one of the first to do so from concept to delivery, lowering barriers to entry and increasing speed to market. PCH founder Liam Casey notes, “Time is the number-one currency in this business."54 His network delivers a boost that can make the difference between success and failure or, at a minimum, provide a crucial understanding of how to scale. For the current Goliaths of consumer electronics, it is the slingshot that could empower a thousand Davids.Emerging manufacturing modelsResponding to the growing opportunities presented by niche markets, and drawing on technologies that make it possible to cost-effectively manufacture small batches or even single instances of many items, manufacturing is shifting from a predominantly scale-driven operation to a sector characterized by multiple production models. Large-scale production will always dominate some segments of the value chain, but three other manufacturing models are arising to take advantage of new opportunities: distributed smaller-scale local manufacturing, loosely coupled manufacturing ecosystems (like that in Shenzhen, China), and an increased focus on agile manufacturing methods at larger operations. While each of these models reduces costs, they also reimagine and restructure how products are made, with a deep long-term effect on value creation. 
The emergence of business models centered on niche markets and smaller-scale production makes it easier for new entrants to establish themselves, attract customers—and potentially eat into the mass markets traditionally served by large-scale manufacturers, on whose platforms they may very well rely.

Distributed local manufacturing

In the twentieth century, an intense focus on cost reduction and efficiency led manufacturers to decamp to countries with low labor costs and to maximize efficiencies gained through mass production. In the United States and Europe, what little domestic manufacturing remained served premium or craft markets. But a recent rise in local manufacturing is bucking that trend, relying on technology and community to keep costs down. Over the last decade, Brooklyn, NY fashion designer Bob Bland experienced the reduction of US apparel manufacturing capacity firsthand—followed by the dwindling of the value chain from raw materials to machinery, the tacit knowledge of the community that supported it, and the opportunity to connect customers’ wants and needs with what gets produced. In 2014, to help reverse this trend, Bland founded Manufacture New York, a sprawling 160,000-square-foot fashion design and production center in Sunset Park, Brooklyn. Her aim: to enable more small manufacturers to subsist locally and be more responsive to local needs. AtFAB, a design firm cofounded by architects Anne Filson and Gary Rohrbacher, aims to design simple, durable furniture that can be produced locally using digital CNC fabrication tools.
Filson and Rohrbacher design and test furniture in their studio, then post the digital files on OpenDesk, “a global platform for open making,” for others to download, customize, and cut using CNC machines.55 OpenDesk has connected a community of designers, local machine shops, and users to drive momentum for the distributed manufacturing movement; its goal is to reduce the environmental impact of shipping, increase local employment, and provide consumers with customizable designer furniture for a fraction of the retail price.56 To support the makers who buy and use designs like AtFAB’s, community organizations such as 100KGarages.com are building local capacity for digital fabrication while educating members, building community—and extending the value of digital platforms such as OpenDesk. The digitization of manufacturing, along with the exponential growth of subtractive and additive digital fabrication technologies and robotics, has made manufacturing more repeatable and portable. Individual designers and small businesses now have the ability to produce high-quality goods locally at low cost. Increased digitization is likely to further lower the cost of customization, giving more advantage to distributed small-scale local manufacturing that captures consumer needs.

Local Motors: Proof of concept for distributed local micro manufacturing

In September 2014, at the International Manufacturing Technology Show (IMTS), a car was 3D printed live for the first time. The Local Motors Strati, based on a contest-winning design by Michele Anoe, took 44 hours to print, another day to CNC mill the body to its final shape, and two more days to assemble additional components.57 The Strati combines new (community-driven, micro manufacturing) business models with new (3D printing) technology to reimagine the nature and process of auto manufacturing.
In summer 2015, Local Motors will put the results into practice, opening a combination micro manufacturing facility and retail outlet dedicated to designing, printing, and selling the Strati. In doing so, it will embody a workable example of distributed local micro manufacturing—and stand as a harbinger of change for manufacturing of even large, complex, and heavily regulated products. In just eight years, Local Motors has upended conventional thinking about what can be manufactured and how. Founded in 2007 by Jay Rogers, the company has created a set of tightly integrated physical and virtual platforms where a community of designers, makers, and engineers come together to design, build, and sell vehicles.58 With its first product, the Rally Fighter, a street-legal off-road automobile, Local Motors redesigned the manufacturing process to work without a steel press, instead building a metal frame and attaching composite body components. This led to a much less capital-intensive process that enabled small-scale distributed manufacturing. The Rally Fighter is sold as a kit car to overcome US regulatory hurdles.

Loosely coupled manufacturing ecosystems

Shenzhen, a city in southern China, was established in 1979; today, it is the anchor city of China’s Special Economic Zone, the global epicenter of consumer goods manufacturing.59 While the zone’s largest manufacturers are known worldwide, some of the more interesting players in this ecosystem are part of a network of smaller factories, called Shanzhai, that evolved around the giants, originally manufacturing gray-market or pirated products but now entering legitimate commerce. These smaller manufacturers’ size, plus their network of interconnections, enable them to perfect small-lot manufacturing while iterating at incredible speed.
Their operators—many former factory workers who have branched out into ownership—have mastered the ability to build high-quality products at low volumes and low cost, at extreme speed, using an ecosystem of loosely coupled small to medium-sized factories and individual experts. The result is a system that can take on the larger Shenzhen factories—and one that is extremely well suited to emerging modes of supply. The beneficiaries are any designers or brands, large or small, established or new, that want to jump in, iterate quickly and cheaply, and scale as needed to meet demand. Over the last two decades, Shenzhen, which the Huffington Post has dubbed “Silicon Valley for hardware,” has drawn expert engineering and manufacturing talent.60 Those who left the zone’s large manufacturers to set up small factories started working together, building a loose but powerful network of knowledge, skills, and capabilities—and creating a near-ideal environment for constant learning. New demands led to new tools and techniques, with network members working together to push the boundaries of capability and cost. One highly visible result is the plethora of inexpensive, high-quality mobile phones dominating the Chinese market. As newer trends such as IoT, wearables, and robotics gain momentum, the Shanzhai are likely to respond with equal alacrity and range. The geographic density of Shenzhen, and its ability to encompass the entire value chain from raw material suppliers and industrial equipment manufacturers to designers, product manufacturers, and assemblers, is unlikely to be replicated exactly. However, similar hubs have appeared elsewhere in China, with footwear manufacturing in the Fujian region and motorcycle manufacturing around Chongqing.
Other, more traditional global manufacturing hubs have the potential to spawn similar loosely coupled networks, mirroring the Shanzhai’s system and success.

Shanzhai: Extending the value of Solowheel

Inventor Shane Chen emigrated from China to the United States in the 1980s, attracted by the American culture of entrepreneurship. In 2012, he introduced the Solowheel, a self-balancing electric unicycle with a starting price of $1,599—a price that made it difficult to move beyond the Western early-adopter audience.61 While the creativity of the Solowheel is notable, an equally interesting—and more far-reaching—story can be found in the response of the loosely coupled manufacturing ecosystem of Shenzhen, China. Within a few months of the Solowheel’s US introduction, multiple knockoffs, and—more interestingly—dozens of variants of the Solowheel appeared on Chinese e-commerce sites. Most were produced by factories in Shenzhen. There were Solowheel-like products with two wheels, ones with seats, others with holders for tablets (to aid in navigation). Prices ranged from $200 to $800.62 On a recent trip to China, the authors of this report visited one Shenzhen factory, Shenzhen Teamgee Electronic Co., or STEC, that manufactures the motorized unicycles. The factory owner had come across the Solowheel on a trip to the United States, and was intrigued by its potential as a last-mile transportation device for the Chinese market. He reached out to “brother factories” in his network, and together they reverse-engineered and reproduced the product. One factory did the battery system, another the motor; STEC handled the plastic molding and electronics. Within a month, the factory network had a product ready for market. Six months later, it was selling the third-generation product. Beyond the impressive speed of iteration was the even more striking ability to improve performance while continuing to cut costs with each cycle. The most recent version, the TG T3, retails at $229.
A fourth generation is in the design phase now—the embodiment of a system honed at every point to take advantage of the emerging value chain.63

Agile manufacturing

For larger manufacturers, renewed interest in agile manufacturing is helping them remain competitive while staying responsive to increasingly fickle and unpredictable market signals. The key to this increased agility: a digital infrastructure that provides access to near-real-time point-of-sale (POS) data rather than lagging monthly or quarterly sales reports. The more accurate demand forecasts are, the more sense it can make to choose highly efficient large production runs. However, when introducing a new product with less certainty of market acceptance, or when making upgrades or changes to a product design, manufacturers may instead choose to focus on producing “minimal viable batch quantities,” matching agile manufacturing practices with agility in the supply chain. Overseas production and ocean freight force larger minimum manufacturing quantities to compensate for long lead times from production to customer. For smaller items, the cost of air freight and short fulfillment cycles may trump the cost of holding inventory, the cost of capital, and obsolescence. Taking all these factors into account, contract manufacturer PCH International demonstrates the benefits of agile manufacturing. In-house tracking technology allows the company to track each order from click to delivery in a single system, “managing to an order of one.” PCH can also customize individual orders at the final assembly level. For Neil Young’s high-end music device, the Pono Player, the buyer can choose a product color, select the signature of a favorite artist to be engraved on the casing, and have his or her choice of music preloaded.
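The batch-quantity trade-off described above can be made concrete with a toy landed-cost comparison. All figures below (unit cost, freight rates, lead times, capital and obsolescence rates) are hypothetical, chosen only to illustrate how a long sea-freight lead time can outweigh cheaper shipping for fast-cycle goods:

```python
# Toy landed-cost model: large overseas batches by sea vs. small
# "minimal viable" batches by air. All figures are hypothetical.

def landed_cost_per_unit(unit_cost, freight_per_unit, lead_time_days,
                         annual_capital_rate=0.10, obsolescence_rate=0.60):
    """Per-unit landed cost: purchase price plus freight plus the cost of
    capital tied up (and value lost to obsolescence) over the lead time.
    The 60% annual obsolescence rate mimics fast-cycle consumer goods."""
    carrying = unit_cost * (annual_capital_rate + obsolescence_rate) * lead_time_days / 365
    return unit_cost + freight_per_unit + carrying

sea = landed_cost_per_unit(unit_cost=20.0, freight_per_unit=0.50, lead_time_days=75)
air = landed_cost_per_unit(unit_cost=20.0, freight_per_unit=2.50, lead_time_days=7)

print(f"sea freight: ${sea:.2f}/unit")  # cheap shipping, long capital exposure
print(f"air freight: ${air:.2f}/unit")  # costly shipping, short exposure
```

Under these illustrative assumptions the air-freighted small batch wins despite a 5x freight cost; for slower-depreciating goods the comparison flips, which is why this is a product-by-product decision rather than a universal rule.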
Beyond using technology to support agility, the company has reengineered its manufacturing lines to be modular—and so easy to update that the minimum viable batch quantity equals the number of products produced on one manufacturing line during a single shift.

Seeed Studios: Embracing agile manufacturing

At MakerCon 2014, Seeed Studios CEO Eric Pan stepped onstage wearing a hand exoskeleton for remote robotic control. It was an early prototype of Dexta Robotics’ Dexmo, a 3D-printed exoskeleton combined with inexpensive sensors that could control a robotic device by mirroring the wearer’s hand movements. While commercial robotic control systems cost tens of thousands of dollars, the Dexmo prototype was hacked together for under $100.64 The Dexmo control arm was designed to illustrate the concept of “design from manufacturing”—using readily available components manufactured by the millions to reduce product cost. In this case, the ingenuity lay in replacing expensive bendable sensors with a combination of cheap, easily acquired or manufactured parts. Seeed is among a growing number of companies that have extended the web of manufacturers and sourcing companies from Shenzhen to the broader world. The Shenzhen-based firm was founded as a bridge between Western makers and China’s agile manufacturing ecosystem. In addition to in-house manufacturing facilities, it has developed relationships with a range of specialized manufacturers and component providers. The company emphasizes “design from manufacturing and design for manufacturing,” aiming to design with manufacturing specs in mind; its Open Parts Library (OPL) catalogues compatible components for the most widely used parts in printed circuit board (PCB) designs. This allows even novice makers to reduce costs and error rates by specifying mass-produced, highly compatible components. The OPL and connections to the Shanzhai ecosystem are two of many ways that Seeed Studios has embraced agile manufacturing.
The result: increased connection, lower barriers to prototyping, and an overall increase in the pace of product innovation. As technology advances exponentially and barriers to learning, entry, and commercialization continue to decrease, product development and commercialization will further fragment. New entities may find it increasingly easy to enter the landscape and to create products addressing specific consumer niches. These businesses will proliferate, though each will be limited in size by “diseconomies of scale”—the larger they get, the less relevant they will become. Meanwhile, as consumer demand fragments, so will addressable markets, making the notion of a “mass market” more and more irrelevant. In this manufacturing environment—with the downstream fragmenting as scale moves upstream—businesses seeking growth will need to rethink the ways they participate in the manufacturing landscape.

The changing economics of the value chain

The lines between manufacturers (which make things) and retailers (which sell things) are blurring. This softening of roles has significance not just for the companies undergoing a transformation, but also for any intermediaries holding inventory along the way. While a few companies are vertically integrated across the value chain, most traditional manufacturers are a few steps removed from their products’ end consumers. In a world where information travels ever more freely, and where cycle times are collapsing, traditional players can struggle to communicate with consumers and to receive—and act on—timely, meaningful feedback. Consumers feel this disconnect as well, and many are opting to connect more directly with the makers of the products they consume. These disconnects can have multiple implications for how value is created and captured. As the distance between manufacturer and consumer narrows, intermediaries whose sole value is to hold inventory are likely to be squeezed out.
The most likely survivors will be those that create more value for consumers, perhaps by providing useful information, helping people make choices, or allowing buyers to experience products in new ways. For the same reasons, successful manufacturers will be those that can engage directly with consumers, narrow the gap between prototype and product, and move their business models from build-to-stock to build-to-order. While no single small company can have a major impact on large incumbents, a slew of agile startups taking market share from the incumbents can create significant change. Entrants are using three approaches to gaining a toehold in the new manufacturing landscape, each at a distinct point in the value chain: engaging the consumer directly, increasing speed from idea to market, and favoring build-to-order over build-to-stock.

Warby Parker: Rethinking the value chain

Eyewear startup Warby Parker was founded in 2010 by four entrepreneurs who saw a problem with the industry—the high cost of glasses. Explains founder and co-CEO Neil Blumenthal, “We were tired of radically overpaying for eyeglasses. It didn’t make sense to us that a pair should cost as much as or more than an iPhone; glasses were invented more than 800 years ago and don’t contain rare minerals or state-of-the-art technology.”65 The company hit a nerve. Since its founding, it’s been growing rapidly despite entering an industry almost entirely closed to outsiders; eyewear is dominated by a single player, Luxottica Group, with a stake in almost every part of the supply chain, including manufacturing (Oakley, Ray-Ban), distribution, retail (Sunglass Hut), and even insurance (EyeMed). All told, Luxottica controls 80 percent of all major eyewear brands.
As often happens in industries dominated by a single player, market prices have stayed high, with an average 20x markup on each pair of glasses sold.66 Warby Parker’s response was to develop its own vertically integrated model, cutting out most of the licensing fees and middlemen. It sourced frames directly from manufacturers (including those providing competitors’ $700 frames) and kept all product design in house, a practice uncommon in the industry.67 This model allows the company to sell a pair of frames with prescription lenses directly to the consumer, without insurance subsidies, for $95. At the same time, it distributes another pair of glasses to a wearer in the developing world. As of this writing, Warby Parker has sold more than a million pairs of glasses and distributed nearly a million more.68 In line with the incredibly personal nature of glasses—which are both a medical device and a lifestyle item—the company combines the convenience of online ordering with customers’ need to experience the product in person. Customers can select up to five frames and try them out for five days for free. This program appeals to customers while allowing the company to maintain full control of its distribution network, bypassing the existing brick-and-mortar infrastructure. Recently, Warby Parker has expanded its business model to include brick-and-mortar stores; as of 2015, the company had retail stores in seven cities and showrooms in an additional six, further extending its vertical depth.69

Eroding value proposition for intermediaries

In a traditional value chain, the manufactured product goes through a series of wholesalers, distributors, and retailers before reaching the consumer. Inventory is held at each of these intermediary stops to buffer against variable demand. Capital is held hostage for a few months, tied up in shipping and inventory until products are sold.
It’s no surprise that the manufacturer’s suggested retail price (MSRP) is usually four to five times the ex-factory cost of a product: A lot of money (and, traditionally, value) is stuck in intermediaries. But as the digital infrastructure continues to cut the distance between manufacturer and consumer, this model, and its conception of value, will most likely be questioned and restructured. When search costs were high, a retail outlet providing multiple side-by-side options had value. Convenience also dictated having as many items as possible available in one location. But then online sales brought consumers not just a near-infinite number of options, but also reviews and feedback that helped buyers choose among them. Meanwhile, quick (even overnight or same-day) shipping has become cost-effective when substituted for the cost of multiple intermediaries. Choice and convenience alone may no longer be adequate value drivers for intermediaries; in this time of transition, as consumers are retrained in new behaviors (online purchasing and ship-to-door), retailers’ traditional sources of power (geographic spread and physical shelf space) are slowly slipping away. In this environment, many hardware startups are forgoing traditional brick-and-mortar retail channels, going directly to consumers via online platforms, such as Amazon, eBay, and Etsy, that offer advantages to both buyers and sellers. While getting on the shelves of a brick-and-mortar retailer can boost sales, it can also create a cash crunch when most of a small firm’s revenue is stuck in inventory or held hostage to long payment terms. As the value captured by controlling physical space and consumer access erodes, retailers that want to stay relevant as value chain players will have to reevaluate and reconfigure their business models.
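The markup arithmetic above can be illustrated with a toy model: each intermediary applies its own markup to the price it paid, compounding the ex-factory cost toward the familiar four-to-five-times MSRP. The stages and margins below are illustrative, not industry data:

```python
# Illustrative value chain: each intermediary marks up the price it paid,
# so margins compound multiplicatively from ex-factory cost to MSRP.
ex_factory_cost = 20.00
stage_markups = [
    ("importer",    1.25),
    ("wholesaler",  1.30),
    ("distributor", 1.35),
    ("retailer",    2.00),  # roughly a "keystone" retail markup
]

price = ex_factory_cost
for stage, multiplier in stage_markups:
    price *= multiplier
    print(f"{stage:<12} ${price:.2f}")

print(f"MSRP ends up at {price / ex_factory_cost:.1f}x ex-factory cost")
```

Compressing or removing any of these stages, as direct-to-consumer players such as Warby Parker do, flows straight through to the final price.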
Eyewear company Warby Parker, for example, has been growing at a rapid pace in an industry historically closed to outsiders, largely due to its ability to bypass traditional distribution and retail channels. As a result, the company is able to offer high-quality frames at lower prices, unlocking value otherwise taken up by intermediaries.

Direct consumer engagement

Traditionally, the consumer has been a few steps removed from the product manufacturer. Today’s hardware startups, however, are using the digital infrastructure to connect directly with the consumer, building affinity for both product and company. As technology evolution accelerates, they focus on brand affinity rather than traditional intellectual property (IP) patent filings and protection. While consumer engagement is not usually seen as part of the supply chain, it is a testament to the power of direct engagement that it can be redefined as a very early point in that chain—which may today be more aptly called the value chain. Many of these startups are using crowdfunding platforms not only to raise initial capital, but also to build a community of fans and supporters around their products—engaging demand in a way that ties it inextricably to supply. In shifting the power balance for market entrants, this stance strikes at the heart of the question of how to capture value, and which entities (new entrants or incumbents, small businesses or large) will do so. In crowdfunding campaigns, consumer engagement does not end with the campaign; rather, businesses continue to connect and communicate with supporters throughout the manufacturing process, offering detailed updates on both successes and challenges. The Pebble E-Paper Smartwatch, an early entrant into the smartwatch market in 2012, was one of the earliest crowdfunded hardware successes. After failing to raise money from venture capital firms, founder Eric Migicovsky was looking for $100,000 to move from prototype to manufacture.
After raising $10,266,845 from 68,929 backers, Pebble stopped its crowdfunding campaign early for fear of not being able to fulfill all of its orders.70 Despite being heavily funded, the company ran into manufacturing problems, due to everything from adhesives that performed badly in Shenzhen’s humid climate to the universal work stoppage for Chinese New Year. Though product delivery was delayed by several months, Migicovsky kept the crowdfunding community in the loop, offering detailed reports including play-by-plays on manufacturing fumbles. Community members were extremely supportive, even suggesting potential solutions and recommending specification upgrades, several of which were incorporated into the product. In the end, a highly engaged, loyal community and customer base helped the Pebble gain market traction where other, larger firms had failed.

Faster speed to commercialization

While small manufacturers such as Pebble embrace a measured pace of development informed by community engagement, larger players are more likely to distinguish themselves through speed. And with ever more rapid shifts in consumer demand, speed to market is increasingly important. “Fast fashion” sellers such as Topshop, for example, credit their success in large part to optimizing manufacturing and the value chain to address changes in consumer tastes and demands. With the success of such models, manufacturers have inevitably followed suit, working to compress time from idea to market. One major draw of manufacturing consumer electronics in Shenzhen is “Shenzhen sudu” (Shenzhen speed), which allows sellers to capture market value almost as fast as it can be identified.71 For the Solowheel (described previously), this resulted in the development of dozens of lower-priced substitutes only weeks after the initial product was released. Today, such rapid speed to commercialization is poised to become the rule rather than the exception.

Build to order vs. build to stock

Traditional manufacturing practices are still built around a “build to stock” model—demand is forecast, and then the product is manufactured to fit that forecast, taking into account multiple lead times along the value chain. But with the ability to engage the consumer directly online come new “build-to-order” models driven by online promotion and preorders. In many respects, crowdfunding for new products is a kind of preorder. While build-to-order manufacturers may still use forecasting to optimize manufacturing efficiency, preorders are even better at gauging consumer demand. San Francisco clothing startup Betabrand, for example, designs and releases a few limited-edition designs every week for preorder. This structure reduces the risk of excess inventory and gives the company constant demand data. Threadless, another clothing startup, hosts a platform on which designers can submit designs for users to vote on. Users can preorder T-shirts, hoodies, posters, or card packs printed with the winning designs. Threadless then produces the items, paying designers a royalty. As consumer preferences shift toward personalization, customization, and creation, direct access to consumers will become critical. Intermediaries reduce speed to market and require capital to build up inventory; they can also make it more difficult for manufacturers to access valuable consumer insights. However, many large manufacturers today rely heavily on intermediaries, weakening their connection to the consumer. This puts them at a disadvantage compared to smaller players whose direct consumer relationships make them more responsive to changing consumer needs.
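A minimal sketch of the contrast between the two models, assuming hypothetical unit economics and demand figures: under build-to-stock, an optimistic forecast strands inventory as sunk cost, while build-to-order produces only against preorders:

```python
# Hypothetical unit economics; all figures are illustrative only.
UNIT_COST, UNIT_PRICE = 12.0, 30.0

def build_to_stock_profit(forecast, actual_demand):
    """Produce to forecast; units beyond actual demand are sunk cost."""
    sold = min(forecast, actual_demand)
    return sold * UNIT_PRICE - forecast * UNIT_COST

def build_to_order_profit(preorders):
    """Produce only what was preordered; no stranded inventory."""
    return preorders * (UNIT_PRICE - UNIT_COST)

# A 10,000-unit forecast meets only 6,000 units of real demand:
print(build_to_stock_profit(forecast=10_000, actual_demand=6_000))
print(build_to_order_profit(preorders=6_000))
```

The build-to-order figure is higher only because no capital was spent on the 4,000 unsold units; the trade-off is that this model depends on the kind of direct consumer engagement described above.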
Large manufacturers should consider how they might use their scale to enable these smaller players instead of competing with them directly.

Xiaomi: Succeeding with adaptivity and responsiveness

Singles’ Day, held on November 11, is China’s equivalent of Cyber Monday (the 1s in the date, likened to “bare sticks,” represent unmarried people). In the five years since the holiday’s introduction by massive e-retailer Alibaba (which logged $9.3 billion in sales on Singles’ Day 2014), November 11 has become the world’s biggest online shopping day. 2014’s best-selling product, the Mi, is the creation of smartphone manufacturer Xiaomi, which sold 1.16 million Mi phones in 24 hours, totaling $254 million in sales. The four-year-old company is now the world’s third-largest smartphone manufacturer, trailing only Apple and Samsung.72 Xiaomi launched in 2010, starting with software—the Android-based operating system MIUI—long before it entered the hardware market. The company prides itself on its ongoing weekly operating system updates; at the time of writing, MIUI had been updated every Friday for more than four years.73 This extreme adaptivity and responsiveness to user feedback quickly attracted a dedicated fan base; by the time the first Xiaomi smartphone was released in August 2011, MIUI had accumulated 2.5 million users—including overseas fans who voluntarily translated the platform into 20-plus languages.74 Though today the main draw of the company is arguably its hardware, the OS is still an important pillar in the Xiaomi ecosystem. As the company’s history shows, Xiaomi’s founders never saw it as just a hardware company. In 2011, cofounder Lei Jun described the shift in market competition: “Competition used to be a marathon; you only needed to know how to run. Now the game is an Ironman triathlon.
To compete, a company must offer great hardware, software, and Internet services.”75 With hardware manufacturing, Xiaomi has put significant energy into both community engagement and fast iterations. Product managers spend approximately half their time in user forums, and the company can incorporate user suggestions in a matter of weeks. Today, Xiaomi ships a new batch of phones every week, and “every batch is incrementally better,” says VP of international sales Hugo Barra.76 And active consumer engagement has allowed the company to spend very little on PR and marketing, especially in its early days. Instead, it spends on online and off-line events, including an annual Mi fan festival. Rather than pursuing traditional distribution and retail, Xiaomi generates 70 percent of its sales online, driving demand from fans, who often preorder or participate in flash sales to get their hands on new products.77 This huge preorder demand allows the company to build to order, purchasing components only after orders are placed and eliminating the risks associated with surplus raw material and warehousing. Still, given retail prices that cut very close to manufacturing costs, there is quite a bit of speculation about the exact source of Xiaomi’s profitability. One sign of ongoing growth is its successful entry into additional hardware categories, including tablets, routers, and televisions—all of which have benefited from the company’s quick turns and dedication to its customers.

Navigating the future manufacturing landscape

The world of manufacturing is shifting exponentially. Not only is it becoming more difficult to create value, but those who do so are not necessarily those best positioned to capture it. Value resides not just in manufactured products, but also in the information and experiences that those products facilitate.
For example, today’s televisions, despite being many times more powerful than those of just a decade ago, are priced so competitively that neither manufacturers nor retailers can maintain anything more than the smallest margin on their sales. Rather than delivering value in their own right, televisions have become a vehicle for the locus of value—the content that viewers watch on them. With this fundamental shift in value from object to experience—or more specifically, from device to the experience facilitated by that device—comes the need for manufacturers to redefine their roles, and hence their business models. The same trends that have pushed manufacturing in the direction of delivering more value for lower cost—and that have made it about far more than producing physical products—will become more and more pronounced over the next few decades. To succeed, products will have to be smarter, more personalized, more responsive, more connected, and less expensive. Manufacturers will face increasingly complex and costly decisions about where and how to invest in order to add value. When assessing the future manufacturing landscape, there is neither a single playbook for incumbents nor a single path for new entrants. Instead, companies should consider these recommendations when navigating the path to enhanced value creation and value capture:

Determine the urgency of change in your specific market
Focus on the most promising business types
Pursue leveraged growth opportunities
Identify and, where possible, occupy emerging influence points

Determine the urgency of change in your specific market

As consumer demands shift, the nature of products and production changes, and intermediaries disappear, we will see increasing fragmentation in the manufacturing landscape. As lowered barriers to forming a business intersect with increasing consumer demand for personalization, the manufacturing landscape will begin to fragment in ways that touch the consumer.
We’ll likely see a wide range of individual players, each focusing on a small, addressable market around a specific niche; both niches and players will proliferate over time. Collectively, these businesses can address a broad spectrum of consumer and market needs, with no single player having enough market share to influence the long-term direction of its domain. This situation will be sustained by the need for only modest investment to enter and maintain one’s position, combined with “diseconomies of scale” that make it more difficult for larger players to compete at this level. Fragmentation will occur mostly around specialized product and service markets, with a wide range of small players either designing and assembling niche products or serving as supporting domain experts or contractors. We see this pattern now in the growth of small hardware startups associated with the maker movement, as well as with sellers on websites such as Etsy. However, accelerated technological change is likely to have a markedly different effect on this era of manufacturing than it has had in the past. Where before, new industry segments consolidated into a few dominant players as their industries matured, the future manufacturing landscape is poised to experience rapid, ongoing disruption leading to continuous fragmentation. Fragmentation will occur at varying rates and to varying degrees across regions, manufacturing subsectors, and product categories. All segments of manufacturing will eventually be affected, with timing and speed of disruption varying based on the industry’s exposure to shifting trends. Barriers to entry in the form of factors such as regulation, design complexity, size of product, and digitization will affect which subsectors first experience disruptive shifts.
However, the speed of the shift will vary greatly even within industry segments—for example, electronic toy manufacturers will have very different experiences from makers of board games, stuffed animals, or building toys. Understanding the timing and speed of change in their industries and subsectors will help businesses assess when and where to play in these changing times. The factors at play aren’t static. The regulatory environment is constantly evolving in response to market needs. Product complexity, size, and digitization are all affected by exponentially evolving technologies. When considering these factors, it is important to evaluate not just the current placement of your product category, but also potential shifts that could accelerate fragmentation in parts of the business landscape.

The regulatory environment

Public policy and regulation play a profound role in the current and future structure of the manufacturing ecosystem. Trade agreements, labor relations, consumer safety and environmental regulations, and privacy and security restrictions all have the power to shape and shift its dynamics and economics. In a survey of 400 CEOs in all major industries, respondents listed the regulatory environment as their top concern, with more than 34 percent reporting spending an increasing amount of time with regulators and government officials.78 Industries with complex supply chains spanning multiple geographies can struggle to change practices developed in response to regulatory requirements. In general, the greater an industry’s regulation, the greater the barriers to entry and the slower the pace of fragmentation. Governments can speed the transition to a more fragmented manufacturing ecosystem by relaxing regulation and encouraging new entrants and innovation.
For example, tax treatments in China’s Special Economic Zones spurred many foreign and domestic companies to relocate, quickly expanding the country’s manufacturing sector.

Product complexity

The more complex the product—measured by the number of components, the intricacy of component interactions, and the extent of product novelty—the more the parties designing parts of the final product must interact. In general, this factor matters most during design and prototyping. This means that the more complex the product, the greater the value of in-house R&D or collaboration by a few tightly coupled players, the more resources a manufacturer should have in house—and the more difficult disruption in the form of fragmentation becomes. However, this is not always the case, as exemplified by the first Apple iPod. Faced with an incredibly tight timeline, the designer, PortalPlayer, tightly defined boundary conditions for each product component, then invited multiple players to compete for the best design in each category. This approach allowed for greater innovation in the final product—as specialists worked on each part of the player—but led to more work for the engineers designing and testing how all the parts came together. Product complexity is also changing as a result of exponential technologies such as 3D printing. The advent of the 3D-printed car took the car from 20,000 parts to 40, significantly reducing product complexity—and enhancing the potential for smaller players to enter the design and final assembly market, leveraging the capability of a few large-scale component providers.79 3D-printed parts are also agnostic as to design complexity; complex geometries can be printed just as easily as a solid block.

Product size

Regardless of product complexity, physical product size matters. The larger the physical product, the more costly it is to prototype, manufacture, store, and ship.
The equipment and space needed to tinker with a small consumer electronics device are far less than those required for a home appliance. And such requirements amplify as a product moves from tinkering to prototyping and on to production. Across the board, categories including larger products will be slower to fragment, in part because their shipping costs make up a significantly higher portion of the final delivered cost to the consumer. As Local Motors CEO Jay Rogers puts it, “For local to go big, big needs to go local.”80 However, increasing product modularity plus new manufacturing processes can drive shifts in product size from large to small. The Tata Nano, India’s $2,000 “affordable car,” was designed to be flat-packed and shipped for assembly close to the delivery point. Local Motors’ Rally Fighter can be purchased either as a fully assembled car or as a kit for self-assembly. Size, it turns out, is not always a static measure.

Digitization

The “more digital” a product or industry is—the more sensors and electronics it incorporates, or the more digitized its processes—the shorter its product cycles. Technology is evolving at a faster pace each year—products contain more and more digital technology, and so become obsolete more and more rapidly. With the greater use of digital manufacturing tools, an increasing number of physical objects being digitized, and a growing number of processes digitally transmitted and managed, the speed of evolution and collective learning increases, in turn speeding the fragmentation process. Consumer electronics and mobile phones have experienced this acceleration, facing ever-shorter product life cycles as a result. One counterpoint: If the software and applications on a product add more value than the product itself, they can lengthen the product life cycle, since the software helps keep the product relevant. As more “dumb” products become “smart,” digitization is reaching categories of manufactured products it has never touched before.
The advent of categories such as wearables, connected cars, and smart lighting is likely to speed obsolescence as the technology in these products ages faster than the products themselves. Considering regulation, size, complexity, and digitization, and the movement of these factors in an industry, can help companies estimate the speed and intensity of coming shifts. The resulting estimates can help companies choose the best ways to participate in, and influence, the shifting manufacturing landscape. How fast is your industry or product segment fragmenting? Which factors—from regulatory environment to digitization—are driving that evolution? In the face of constant change, companies tend to step back and take a “sense and react” approach, watching the factors driving change and preparing themselves to react to new market conditions. Now, however, leaders have the opportunity to use a deep understanding of these drivers to anticipate potential changes. They can then move their business in a direction both congruent with market forces and designed to position their company favorably.

Focus on the most promising business types

The ability to create and capture value will vary depending on the type of business. As discussed previously, the increasing demand for personalization and customization is poised to increase market fragmentation, while making it increasingly difficult for any single company to sustainably meet all of the consumer’s needs. The companies that do the best job of capturing value will be those that figure out how to work with, and use, fragmentation rather than fighting it. Scale will move upstream to components and platforms, while scope (via a greater diversity of assemblers) will move downstream, owning the “last mile” to the customer.
We delve into these structural elements in much more detail in our paper The hero’s journey through the landscape of the future.81 Here, we present an overview of the coming landscape with a focus on manufacturing, in order to help participants determine which business roles might be most appropriate for them. Both incumbents and new entrants should be aware of possible roles in this system, and each business should determine the best fit based on its assets, strengths, and core DNA as a corporation. In general, large companies are well suited to take on infrastructure management or customer relationship roles, while smaller companies are best positioned to play as niche product and service businesses. Entities looking for sustained growth may not be able to achieve it in the more fragmented downstream landscape, and will instead need to shift upstream to achieve their growth goals. As product innovation, design, and assembly fragment, other parts of the business landscape will consolidate where scale and scope make it easier to support the niche operators. Areas of concentration will be marked by players, tightly focused on a single business type or role, that can muster the significant level of investment required to enter or sustain marketplace position in that role, and that generate value by leveraging resources such as large-scale technology infrastructure or big data to provide information, resources, and platforms to more fragmented businesses. Because these areas of concentration are driven by significant economies of scale and scope, early entrants that can quickly achieve critical mass are likely to gain a significant competitive advantage. Businesses that choose to focus on one of these roles are advised to be early movers rather than fast followers.
We anticipate scale-and-scope operators to fall into three broad business roles: infrastructure providers, aggregation platforms, and agent businesses.

Infrastructure providers deliver routine high-volume processes requiring large investments in physical infrastructure, such as transportation networks (e.g., UPS and FedEx) and scale manufacturing plants (e.g., Flextronics and Foxconn). Infrastructure providers also exist in digital technology delivery (e.g., Amazon AWS and Cisco) and scale-intensive business processes (e.g., Infosys and Wipro). In the second category are aggregation platforms—virtual and physical platforms that foster connections, broker marketplaces, or aggregate data. For example, online marketplaces such as eBay and Etsy connect buyers and sellers; Kickstarter delivers financing by connecting artists, makers, and innovators with their fans; and Facebook connects people socially to share knowledge or information. The third category encompasses the role of agent. The consumer agent, a trusted advisor that helps consumers navigate an array of possible purchases, is the agent type most relevant to the manufacturing landscape. While agent businesses have always existed—from wealth managers to personal shoppers—their customer base has been mostly the affluent. Now, however, technology is making such services more widely available to the general population. In manufacturing, fragmentation in the area of final product assembly will give rise to agents that guide retail consumers to the right options for them. The retailers most likely to survive and thrive are those that embrace this role, becoming experts dedicated to supporting each consumer’s unique needs. The three roles above are based on scale and scope, making them attractive positions for companies looking to achieve significant and sustained growth. Businesses in these roles collaborate closely with the fragmented but focused niche players.
In the resulting ecosystem of niche players supported by scale-and-scope businesses, “mobilizers" are the connective tissue that organizes an ecosystem to move in specific directions. Mobilizers can add value by framing explicit motivating goals, providing governance that enhances interactions, and facilitating collaboration. Maker Media’s Maker Education Initiative (slogan: “Every Child a Maker") is a good example of a mobilizer framing an explicit goal. In addition to its rallying cry to increase maker education, the group publishes programs and playbooks designed to provide governance and facilitate collaboration. It is not surprising for these roles to emerge in response to the shifts in the manufacturing landscape described earlier. Each role represents an essential business type. For example, fragmented niche operators are product businesses, focused on designing and developing creative new products and services, getting them to market quickly, and accelerating their adoption. This business type is driven by the economics of time and speed to market. It requires skills and systems focused on rapid design and development iteration, supporting the quick identification and addressing of market opportunities. The culture of this type of business prioritizes creative talent and is oriented toward supporting creative stars. Infrastructure providers and aggregation platforms are examples of infrastructure management businesses. This business type is driven by powerful scale economics. It requires skills to manage routine high-volume processing activities, and has a culture that prioritizes standardization, cost control, and predictability. In this business culture, the facility or asset trumps the human being. The agent role is an example of the customer relationship management business type, which is driven by economics of scope—building broader relationships with a growing number of customers. 
The more this business type knows about any individual customer, the more accurately it can recommend resources to that customer. Simultaneously, the more it knows about a large number of customers, the more helpful it can be to any individual based on its ability to see larger patterns. To succeed, such businesses need to understand the evolving context of each customer based on carefully structured interactions, plus a growing data set that captures context and history. The culture of this business type is relentlessly customer-focused—seeking to anticipate needs before they arise, building trust, and positioning the business as a trusted advisor rather than a sales-driven vendor. Aiming to become infrastructure management or customer relationship businesses can help large companies leverage existing economies of scale and scope to occupy the concentrating portions of the business landscape. Smaller companies, in contrast, are best served by aiming to become a product/service type of business, filling in the more fragmented portions of that landscape. Today, most large companies operate multiple types of businesses (and thus play multiple roles) within a single organization. Given the uncertainty of a rapidly changing world, such diversity is often viewed as a strategic advantage; a portfolio is comforting. However, when a company participates in too many business types at once, it can lack focus. Diverse groups compete for resources, chafe under inappropriate economics or metrics, and clash culturally. The reality is that the three business types bundled into today’s large enterprises have very different economics, skill sets, and cultures. In the past, large companies bundled these business types together because of the high cost and complexity of coordinating activity across independent companies.
However, today’s ever more powerful digital infrastructure makes it far less expensive, and far easier, to coordinate activity across a growing number of independent entities. As competitive pressure intensifies, companies that keep the three business types tightly bundled will likely see reduced performance as they seek to balance the competing demands of these business types. Such businesses can become more vulnerable to companies that, by focusing on a single business type, become world-class in their chosen activities. Further, as the pace of change accelerates, the imperative to learn faster becomes more pronounced. A company that focuses on a single business type is likely to learn much faster without the distraction of multiple competing businesses within its walls. It is more likely to attract and retain world-class talent, gaining employees seeking to be the heroes of the organization rather than take on second-class support roles. Its learning potential can be further enhanced by the ability to connect and collaborate with trusted top-tier companies of the other two types. To flourish in an increasingly competitive environment, a company should resist the temptation to do everything. Instead, it should put its energies into one primary role. Given the divergent drivers, cultures, and focuses of the three business types, an organization that contains more than one can benefit from first separating them operationally within the firm. Then, over time, it can choose a primary type to prioritize as its company’s core DNA, ultimately shedding operations in the other two business types completely. Perhaps paradoxically, such unbundling can set the stage for much more sustained and profitable growth. Large incumbents may be understandably reluctant to let go of their current positions in the value chain.
But failing to adapt to the new landscape means missing a powerful opportunity to own an influential new position in that chain—a foundational platform on which a large number of smaller players build. If this role is played out correctly, a new ecosystem of smaller, specialized niche providers will form around the large incumbent to customize and personalize products (through physical products, software, or services). All of these will be tied together by an entirely new set of players—mobilizers, data platforms, and connectivity platforms.

Pursue leveraged growth opportunities

Historically, to achieve growth, entities had two options: buy or build. Advances in digital technology and connectivity allow for a third option, “leveraged growth," in which a business can connect with and mobilize a growing array of third parties in the fragmenting parts of the manufacturing landscape to create and capture value for its customers. Companies occupying the platform, infrastructure, and agent roles, which are inherently positioned for growth, can accelerate that growth and gain flexibility by leveraging trusted resources from outside their organizations. In addition to financial resources, such players can leverage the capabilities of their third-party partners. By doing so, they reduce risk, broaden their perspective to maximize learning and performance, and cut costs by taking advantage of existing resources. Just as important, they build a network of trusted relationships, a factor becoming more and more crucial in navigating the future manufacturing landscape. This level of transformation is very much in the domain of larger businesses—whether incumbent or entrant—with the resources to influence market factors. These businesses will be doubly successful if they develop strategies—and platforms—that allow them to attract and support a large number of smaller, more fragmented players.
Leveraged growth can also help the larger business sense the shifting environment more accurately, and continue to shape it. In turn, smaller firms can leverage platform businesses for financing, learning, and prototyping, reducing capital investment while increasing speed to market. They can address surges in demand by relying on infrastructure providers, and can more effectively connect with relevant customers through agent businesses. Though they may have little power to move the market individually, they can maximize their influence as part of a broader ecosystem. Two potentially promising business models emerging in the manufacturing landscape can enable leveraged growth for large incumbents: the shift from products to platforms and from ownership to access. As digital and physical products become platforms, they enable a wide variety of participants to join, collaborate, and innovate. Platforms have a tremendous network effect, growing in importance as more participants join and thus extend their functionality. They are also a cheaper, more flexible, and less risky way for participants to enter a space. Once platforms gain traction and achieve a critical mass of participants, they become hard to replace. The shift from ownership to access allows manufacturers to transform their focus from making products to developing deep, long-term customer relationships. At the core of this shift is a platform that aggregates resources and enables consumer access. With it, consumers can access products as they need them. Manufacturers can use data collection and product use feedback to continually grow and improve. And as access providers gain a deeper knowledge of customers and their needs, they can identify and mobilize a broader range of third parties to enhance the value provided to customers.

Identify and, where possible, occupy emerging influence points

There are still more ways to capture value in the rapidly shifting manufacturing landscape.
With eroding barriers to entry and continued exponential growth of the digital infrastructure, many companies are seeing their positioning weaken. Strategic positions in the value chain—or influence points—are shifting. These positions are often key to enhancing value-capture potential. Power once derived from harboring stocks of knowledge now arises from an organization’s position in the flow of knowledge. While patents and intellectual property remain valuable, their strategic significance is declining as the pace of innovation increases and product life cycles shrink. New influence points are instead emerging around flows of knowledge. Privileged access to these flows makes it possible to identify and anticipate change before others do, and to shape them in a way that strengthens future positioning. Access to these diverse flows can also speed up learning—the key to competitive advantage in a quickly evolving market. So how do influence points emerge and evolve? They attract participants through the value they provide, and inspire action with positive incentives. Influence points are most likely to emerge where they can provide significant and sustainable functionality to the broader platform or ecosystem, where their functionality can evolve rapidly, where network effects drive consolidation and concentration of participants, and where they can encourage fragmentation of the rest of the platform or ecosystem. For example, in the early days of the personal computer industry, development of de facto standards for microprocessors and operating systems encouraged significant fragmentation in other aspects of the technology. These standards also created concentrations in knowledge flows as companies sought to connect with makers of the standard technologies to understand how they were likely to evolve.

GE FirstBuild: Big companies behaving nimbly

In February 2015, GE launched its first crowdfunding campaign on Indiegogo.
The Paragon Induction Cooktop is a Bluetooth-enabled tabletop cooker created by GE subsidiary FirstBuild—and the test case for the company’s new manufacturing model. The campaign met its $50,000 funding goal in less than 24 hours and tripled it by the end of the day, reaching a total of nearly $300,000 at the time of publication.82 Funders and consumers may ask what a GE subsidiary is doing looking for crowdfunding. The answer has to do with the way products are developed at GE. The company excels in scale and lean manufacturing and is very good at producing high product volume at a low price. Product innovation and development, however, is another story. Where Indiegogo’s base of makers and small-scale entrepreneurs has speed on its side, large companies like GE can take two to three years to bring a new product to market, making it hard to keep up with market demands. It’s a perennial problem, and one common among large firms. In 2014, Kevin Nolan and Venkat Venkatakrishnan came up with a solution: a combined online and physical co-creation community for makers, designers, and engineers. The idea for FirstBuild came about when GE asked itself two simple questions: Why did it take so long to develop new products, and why could smaller hardware entrepreneurs develop them so much more quickly? The answer was equally simple: To build products quickly, GE needed to test more ideas with more people more frequently. It needed a system combining the capabilities of a large manufacturer and a lean startup. For FirstBuild, an Indiegogo launch complemented GE’s existing product development capabilities in several ways. Crowdfunding locks in sales before a product enters production, allowing for incredibly accurate demand forecasts and resulting manufacturing choices. If a campaign generates only a few pre-orders, FirstBuild can use a small manufacturing partner to produce the necessary units, then discontinue production without losing money on a larger effort.
A hit like the Paragon can ensure big sales, and FirstBuild can leverage GE’s massive manufacturing capabilities to produce the needed units, avoiding stockouts. In both cases, crowdfunding generates immediate viability feedback before production, allowing the company to build to order. Crowdfunding also helps FirstBuild guarantee minimum product revenue before launch; selling a crowdfunding campaign’s minimum number of units can fund some or all of a product’s fixed production costs. FirstBuild also acts as a test lab for shifts in the future manufacturing landscape. Integrating community into design, building, and sales directly addresses changing consumer needs. The Paragon cooker introduces smart cooking and integrated test software platforms and apps. By applying agile prototyping and tapping into the Chinese manufacturing ecosystem, FirstBuild is testing the shifting economics of manufacturing. And by selling directly to customers and building to order, it is shifting the economics of the value chain. In entering a space formerly inhabited by startups and individual makers, GE is changing the game for product development across the board—dramatically cutting development time and cost while insuring against large-scale failure. The effects on the industry are sure to be both fast and far-reaching. Another example of shifting influence points is the ongoing value shift from physical products to digital streams created by smart products. As products become more digitized, value shifts from the product itself to the stream the product enables. Here the greatest knowledge flows may have little to do with specific products; instead, they become part of the emerging IoT infrastructure. Such shifts tend to create new influence points further from the core capabilities of current manufacturing incumbents—points that favor large external players such as Google, Facebook, Apple, and Amazon.
Google’s acquisition of home IoT device company Nest and Facebook’s acquisition of virtual reality startup Oculus VR make a lot more sense in this context—as do Google’s Android, Apple’s iPhone and iPad, and Amazon’s Kindle devices. As the manufacturing landscape and value chain evolve, old influence points will erode and new ones emerge. For established incumbents, doing nothing in this area is likely to lead to loss of influence and an erosion in the ability to capture value. To maintain or extend current levels of influence, manufacturers should evaluate their value chains, identifying current influence points and possible changes that could affect their position. Next, they should identify potential new influence points where they might establish strongholds. This may mean releasing elements once central to a firm’s value, and reimagining value in the context of potential positioning in the value stream. Big firms—both incumbents and new entrants—have an advantage here, as they tend to have resources valuable to a large number of fragmented players. Patent portfolios can be seen as a means to increase and focus knowledge flow, rather than as a static stock of knowledge or barrier to entry. GE took this path when it gave Quirky community members access to GE patents, encouraging innovation outside the initial patent domain. Clearly, not everyone can target and occupy influence points; by definition, there are only a few to be had, and doing so is not required for success. But businesses that can control influence points can create more sustainable advantages and get advance information about evolving markets. When navigating the path to enhanced value creation and value capture, a business should first determine how these ideas apply to its particular industry and its position within it, as well as to its organization and the products it produces. 
The next step is to determine the roles with the greatest potential for growth, exploring how it might shift to occupy one or more of those roles. Finally, the company should look for opportunities to collaborate with other players, large and small, in the relevant ecosystem—and determine how it might occupy emerging influence points. Given the ever-changing nature of the manufacturing landscape, such exploration and evolution are an ongoing process, one that businesses must continually pursue if they want to stay relevant.

Conclusion

The manufacturing landscape is undergoing a massive collective shift. Consumer demands, the nature of products, and the economics of production and distribution are all evolving. Boundaries are blurring between manufacturing and technology on one hand and manufacturing and retail on the other. While more value is being created, manufacturers are under increasing pressure. In this environment, capturing value requires fundamentally rethinking business models—remapping a company’s strategic positioning based on internal capabilities, external shifts, and emerging influence points. Several large incumbents are making moves in these directions. GE Aviation moved from selling jet engines to selling power by the hour, as a utility company would. And savvy startups are developing business models in alignment with the new manufacturing landscape. Xiaomi started with a direct-sales model that prioritized consumer relationships, then eventually expanded to include traditional retail channels. The company knew that the influence point was closeness to the consumer; owning that space allowed it to develop good terms with retailers. The manufacturing landscape is facing dramatic changes.
Creating and capturing value in this new environment will require understanding the factors driving change in specific manufacturing sectors, focusing on activities that convey a structural advantage, leveraging the skills and capabilities of third parties, fundamentally rethinking business models, and identifying influence points. There is no one path to success; instead, we offer a set of pointers and guideposts. Take this opportunity to define your own success—and blaze your own trail through the new landscape of manufacturing.


Brand Equity is Overrated

Andy Warhol once famously claimed that America’s tradition of mass production was what made it a great country. He said: “You can be watching TV and see Coca-Cola, and you can know that the President drinks Coke, Liz Taylor drinks Coke, and just think, you can drink Coke, too. A Coke is a Coke and no amount of money can get you a better Coke…all the Cokes are the same and all the Cokes are good. Liz Taylor knows it, the President knows it…and you know it." This kind of thinking, that every unit of a product should be exactly alike forever, has been part of the foundation of branding strategy for decades. Consumers had, in the past, relied on consistency as a measure of quality. But in 2017, the relationship that shoppers have (and what they want to have) with the brands that they buy has changed. Consumers are less trusting of big brands, and overreliance on sameness may be costing companies business with modern shoppers who are looking for more personal experiences. Even Coca-Cola, Warhol’s shining symbol of mass production, is embracing the trend towards customization in their bottle designs. They took a huge risk with their enormously successful “Share a Coke" campaign, where they replaced their legendary logo with 1,000 different names. Not only did this create a smart, personalized experience for consumers, it also showed that the company understood the need for branding that lends itself to social media engagement. A big part of the customization trend is that the evolving media landscape has transformed company-consumer interactions, so that there is more two-way conversation and less one-way messaging. The “Share a Coke" bottles made consumers feel excited about drinking something that has been in their family’s fridge for generations, and by risking their brand equity, Coca-Cola saw soft drink sales rise more than 2%. The company has taken this concept one step further with their “It’s Mine" campaign.
Using HP’s SmartStream Mosaic software, Coca-Cola produced millions of glass Diet Coke bottles, each with a completely unique design. Purchasing one of these bottles means owning the only Diet Coke in the world that looks the way that it does – no movie star or President can drink one like it. This is the future of branding. When Tazo tea first came onto the scene in the ’90s, the spiritual, mythical look was considered innovative and modern — as The Dieline put it, the packaging “really represented the times". For years Tazo was associated with that new-age image, and the design remained virtually unchanged for about two decades, even after the brand joined forces with Starbucks. Once the coffee giant completed their own redesign in 2012, they decided that it was time to bring Tazo into the new millennium. What was once a fun standout in the boring tea market was now corny and outdated, and nearly every visual element that defined Tazo was thrown out. In its place was a clean, white background, with the flavors present in each variety clearly displayed in a neat little picture. The rebrand here was so successful because the company understood what was valuable about the product and maintained its spirit with the new look, while still being unafraid to go in a radically different direction than what fans were used to. What is also interesting about the redesign is that nowhere on the packaging does it make any claim to be affiliated with Starbucks. Starbucks is one of the most recognizable and beloved brands in the world, and if the company was trying to introduce the tea to a new generation, then the association could have been a potentially valuable asset. The fact that they distanced the packaging from the Starbucks brand could indicate how the company anticipated consumers may come to feel about big brands.
Unfortunately, years of pink slime exposés and soy chicken sandwich scares have conditioned consumers to be wary of brands that could be considered “Big Food". Today’s shoppers are drawn to brands that seem to care about them and their families, and the reputation of national brands as a whole is that they care far more about finding ethical shortcuts in order to increase profits. One of the core tenets of brand equity is name association, and if all shoppers can think of is artificial flavors and hormones, then brand equity is worthless. Hellmann’s has also recently undergone a redesign to better appeal to contemporary shoppers. The “deli-inspired" look and feel of the product gives off a more wholesome vibe, and the photographs of eggs play into consumers’ desire for fresh, easily understandable ingredients. The color palette isn’t an extraordinarily dramatic change from what Hellmann’s had before, but the jar does look different enough that many longtime buyers searching for that distinct yellow label will have a more difficult time finding it. Some may even abandon the brand altogether, afraid that Hellmann’s is either now “too fancy" for them or that the change in design signifies some kind of major difference in flavor. Hellmann’s knows that they face these risks, and yet has chosen to ditch their iconic packaging anyway in order to stay relevant. Ultimately, relevance does matter more than consumer loyalty. Some companies are forgoing their usual branding in order to compete in a specific local market. For example, Airbnb, which has been hugely successful in this new anti-big-brand economy, just announced that they are not even keeping their name consistent across all markets. In China, they are now calling themselves “Aibingyi", which is meant to be easier for Chinese users to pronounce.
While it is not unprecedented for businesses to change their names when entering different markets, Airbnb faces unique risks in that this could cost them users that travel internationally, a group that is quickly growing. If a frequent Airbnb user from Sweden is vacationing in Shanghai, they may overlook the unfamiliar Aibingyi. Brand equity, while important, is overvalued by big brands. More than consistency, today’s shoppers value niche traits like individuality, freshness, and smallness. Scarred by many years of health scandals, consumers do not have faith in big brands the way that they used to, and brand recognition is no longer the coveted feature that it once was. In 2017, companies that hold on too tightly to their same old branding risk falling behind in the new economy. Click to continue reading: http://worksdesigngroup.com/brand-equity-is-overrated/


The Made In America Movement Driven By Innovation, Not Nationalism

A new generation of companies is making higher-end clothing, furniture, eyewear, and more in the United States. Can their success lead to a revival of American manufacturing on a larger scale?

By Elizabeth Segran

It’s a sunny morning in Boyle Heights, a working-class neighborhood in East Los Angeles. Marty Bailey, 55, is about to start his day as the head of manufacturing at the eco-chic label Reformation. The brand’s 33,500-square-foot headquarters houses the first fully sustainable sewing factory in the United States. When visitors stop by, they tend to notice the Curtis Kuling graffiti scrawled on the walls, the hip vintage furniture that populates the design studio, and employees tending to their plots in the community garden outside. But for Bailey, the most exciting thing about the factory is its totally reimagined manufacturing process. Reformation is a fast fashion brand, constantly changing its product mix to keep up with the latest trends. But founder Yael Aflalo has upended each step of her supply chain to make it leaner, more nimble, and more environmentally friendly. A team of data scientists keeps track of best-selling outfits and conveys this information to Bailey, who is tasked with producing garments based on real-time demand. This ensures that the brand is delivering products that customers love, while eliminating wasted inventory. “Today, we’re making 300 maxi dresses," Bailey says. “Yesterday, we were making T-shirts. From a logistical and supply-chain perspective, that’s a very complicated thing to do. It’s a challenge, in the best possible way." This approach to manufacturing bears little resemblance to the Fruit of the Loom factory where Bailey first launched his career three decades ago.
In 1984, Bailey had just graduated from Campbellsville University in Kentucky, got married, and had a baby girl on the way. He needed a job quickly, so he took a position at “the Factory" (as the locals called it) on the western edge of Campbellsville. At the time, it was among the largest apparel-making operations in the world, with 700,000 square feet devoted to bleaching and dyeing fabric, cutting and sewing, and quality control. During each shift, Bailey remembers tens of thousands of professional sewers being shuttled in by school bus from seven counties, churning out more than 4 million identical plain white T-shirts and tighty-whities a week, filling the cavernous floor with the rhythmic buzzing and whirring of sewing machines. One sewer, June Judd, who spent 18 years at the factory, remembers sewing the fly onto 15,000 men’s briefs every single day. Sewers earned double the minimum wage–between $10 and $12 an hour–often making around $50,000 a year with overtime ($75,888 a year, adjusted for inflation). The twists and turns of Bailey’s 33-year-long career tell a story about how factories in the United States hollowed out in the ’90s, then unexpectedly began to show signs of life again a decade ago. Bailey, it seems, is kind of like the Forrest Gump of American manufacturing: always at the right place at the right time when the industry is on the brink of transformation. He’s gone from Fruit of the Loom’s Kentucky sewing plant to setting up manufacturing facilities for American companies in El Salvador and Honduras, to developing high-tech operations for American Apparel, and now, Reformation’s factory in L.A. The shift is clear: These days, instead of massive conglomerates making generic products, a wave of tech-savvy startups are choosing to manufacture in America. Their reasons for going local often have little to do with patriotism.
They’re primarily searching for better ways to create high-quality, state-of-the-art products and deliver them to customers faster than competitors making merchandise overseas.

Full article: https://www.fastcompany.com/3068744/made-in-america-20


Customer Loyalty Is Overrated

Marketers spend a lot of time—and money—trying to delight consumers with ever-fresher, ever-more-appealing products. But their customers, it turns out, make most purchase decisions almost automatically. They look for what’s familiar and easy to buy. This package explores that idea and the science behind it, offers a counterpoint, and includes conversations with the cochairman of the LEGO Brand Group and the chairman of Intuit.

In this spotlight:
Counterpoint: “Old Habits Die Hard, but They Do Die,” by Rita Gunther McGrath. Dollar Shave Club’s subscription model is a striking illustration.
In Practice: “Habit is how we build the connection.” A conversation with Jørgen Vig Knudstorp, cochairman of the LEGO Brand Group, by David Champion.
In Practice: “A product that lets people hold on to their habits.” An interview with Intuit chairman and cofounder Scott Cook, by David Champion.

Late in the spring of 2016 Facebook’s category-leading photo-sharing application, Instagram, abandoned its original icon, a retro camera familiar to the app’s 400-million-plus users, and replaced it with a flat modernist design that, as the head of design explained, “suggests a camera.” At a time when Instagram was under a growing threat from its rival Snapchat, he offered this rationale for the switch: The icon “was beginning to feel…not reflective of the community, and we thought we could make it better.” The assessment of AdWeek, the marketing industry bible, was clear from its headline: “Instagram’s New Logo Is a Travesty. Can We Change It Back? Please?” In GQ’s article “Logo Change No One Wanted Just Came to Instagram,” the magazine’s panel of designers called the new icon “honestly horrible,” “so ugly,” and “trash,” and summarized the change thus: “Instagram spent YEARS building up visual brand equity with its existing logo, training users where to tap, and now instead of iterating on that, it’s flushing it all down the toilet for the homescreen equivalent of a Starburst.”
It’s too soon to tell whether the design change will actually have commercial consequences for Instagram, but this is not the first time a company has experienced such a reaction to a rebranding or a relaunch. PepsiCo’s introduction of its aspartame-free Diet Pepsi was—like the infamous New Coke debacle—a botched attempt at reinvention that resulted in serious revenue losses and had to be reversed. The interesting question, therefore, is: Why do well-performing companies routinely succumb to the lure of radical rebranding? One could understand the temptation to adopt such a strategy in the face of disaster, but Instagram, PepsiCo, and Coke were hardly staring into the abyss. (It’s worth noting that Snapchat, whose market share among young users is now particularly strong, has assiduously stuck to its familiar ghost icon. Full disclosure: A.G. Lafley serves on the board of Snap Inc.)

The answer, we believe, is rooted in some serious misperceptions about the nature of competitive advantage. Much new thinking in strategy argues that the fast pace of change in modern business (perhaps nowhere more obvious than in the app world) means no competitive advantage is sustainable, so companies must continually update their business models, strategies, and communications to respond in real time to the explosion of choice that ever more sophisticated consumers now face. To keep your customers—and to attract new ones—you need to remain relevant and superior. Hence Instagram was doing exactly what it was supposed to do: changing proactively. That’s an edgy thought, to be sure; but a lot of evidence contradicts it.

What’s in an Icon?
The new Instagram icon was vilified by the online community, which had become used to the original. Instagram made the change out of a mistaken belief that the image of a traditional camera was not relevant for users who had never owned one.
Consider Southwest Airlines, Vanguard, and IKEA, all featured in Michael Porter’s classic 1996 HBR article “What Is Strategy?” as exemplars of long-lived competitive advantage. A full two decades later those companies are still at the top of their respective industries, pursuing largely unchanged strategies and branding. And although Google, Facebook, or Amazon might stumble and be crushed by some upstart, the competitive positions of those giants hardly look fleeting. Closer to home (one author of this article is part of the P&G family), it would strike the Tide or Head & Shoulders brand managers of the past 50 years as rather odd to hear that their half-century advantages have not been or are not sustainable. (No doubt the Unilever managers of long-standing consumer favorites such as Dove soap and Hellmann’s mayonnaise would feel the same.)

In this article we draw on modern behavioral research to offer a theory about what makes competitive advantage last. It explains both missteps like Instagram’s and success stories like Tide’s. We argue that performance is sustained not by offering customers the perfect choice but by offering them the easy one. So even if a value proposition is what first attracted them, it is not necessarily what keeps them coming back. In this alternative worldview, holding on to customers is not a matter of continually adapting to changing needs in order to remain the rational or emotional best fit. It’s about helping customers avoid having to make yet another choice. To do that, you have to create what we call cumulative advantage. Let’s begin by exploring what our brains actually do when we shop.

Creatures of Habit
The conventional wisdom about competitive advantage is that successful companies pick a position, target a set of consumers, and configure activities to serve them better. The goal is to make customers repeat their purchases by matching the value proposition to their needs.
By fending off competitors through ever-evolving uniqueness and personalization, the company can achieve sustainable competitive advantage. An assumption implicit in that definition is that consumers are making deliberate, perhaps even rational, decisions. Their reasons for buying products and services may be emotional, but they always result from somewhat conscious logic. Therefore a good strategy figures out and responds to that logic. But the idea that purchase decisions arise from conscious choice flies in the face of much research in behavioral psychology. The brain, it turns out, is not so much an analytical machine as a gap-filling machine: It takes noisy, incomplete information from the world and quickly fills in the missing pieces on the basis of past experience. Intuition—thoughts, opinions, and preferences that come to mind quickly and without reflection but are strong enough to act on—is the product of this process. It’s not just what gets filled in that determines our intuitive judgments, however. They are heavily influenced by the speed and ease of the filling-in process itself, a phenomenon psychologists call processing fluency. When we describe making a decision because it “just feels right," the processing leading to the decision has been fluent. Processing fluency is itself the product of repeated experience, and it increases relentlessly with the number of times we have the experience. Prior exposure to an object improves the ability to perceive and identify that object. As an object is presented repeatedly, the neurons that code features not essential for recognizing the object dampen their responses, and the neural network becomes more selective and efficient at object identification. In other words, repeated stimuli have lower perceptual-identification thresholds, require less attention to be noticed, and are faster and more accurately named or read. What’s more, consumers tend to prefer them to new stimuli. 
In short, research into the workings of the human brain suggests that the mind loves automaticity more than just about anything else—certainly more than engaging in conscious consideration. Given a choice, it would like to do the same things over and over again. If the mind develops a view over time that Tide gets clothes cleaner, and Tide is available and accessible on the store shelf or the web page, the easy, familiar thing to do is to buy Tide yet another time. A driving reason to choose the leading product in the market, therefore, is simply that it is the easiest thing to do: In whatever distribution channel you shop, it will be the most prominent offering. In the supermarket, the mass merchandiser, or the drugstore, it will dominate the shelf. In addition, you have probably bought it before from that very shelf. Doing so again is the easiest possible action you can take. Not only that, but every time you buy another unit of the brand in question, you make it easier to do—for which the mind applauds you. Each time you choose a product, it gains advantage over those you didn’t choose. Meanwhile, it becomes ever so slightly harder to buy the products you didn’t choose, and that gap widens with every purchase—as long, of course, as the chosen product consistently fulfills your expectations. This logic holds as much in the new economy as in the old. If you make Facebook your home page, every aspect of that page will be totally familiar to you, and the impact will be as powerful as facing a wall of Tide in a store—or more so. Buying the biggest, easiest brand creates a cycle in which share leadership is continually increased over time. Each time you select and use a given product or service, its advantage over the products or services you didn’t choose cumulates. The growth of cumulative advantage—absent changes that force conscious reappraisal—is nearly inexorable. Thirty years ago Tide enjoyed a small lead of 33% to 28% over Unilever’s Surf in the lucrative U.S. 
laundry detergent market. Consumers at the time slowly but surely formed habits that put Tide further ahead of Surf. Every year, the habit differential increased and the share gap widened. In 2008 Unilever exited the business and sold its brands to what was then a private-label detergent manufacturer. Now Tide enjoys a greater than 40% market share, making it the runaway leader in the U.S. detergent market. Its largest branded competitor has a share of less than 10%. (For a discussion of why small brands even survive in this environment, see the sidebar “The Perverse Upside of Customer Disloyalty.”)

The Perverse Upside of Customer Disloyalty
If consumers are slaves of habit, it’s hard to argue that they are “loyal” customers in the sense that they consciously attach themselves to a brand on the assumption that it meets rational or emotional needs. In fact, customers are much more fickle than many marketers assume: Often the brands that are believed to depend on loyal customers achieve the lowest loyalty scores. For example, Colgate and Crest are the leading toothpaste brands in the U.S. market, with about 75% of it between them. Customers for both are loyal 50% of the time (their preferred brand accounts for 50% of their annual toothpaste purchases). Tom’s toothpaste, a niche “natural” brand based in Maine, has a 1% market share and is thought to have a fanatical customer following. One might expect the data to show that the 1% are mostly repeat buyers. But in fact Tom’s customers are loyal only 25% of the time—half the rate of the big brands. So why do fringe brands like Tom’s survive? The answer, perhaps perversely, is that with big-brand loyalty rates at 50%, just enough customers will buy small brands from time to time to keep the latter in business.
But the small brands can’t overcome the familiarity barrier, and although entirely new brands do enter categories and become leaders, it is extremely rare for a small fringe brand to successfully take on an established leader.

A Complement to Choice
We don’t claim that consumer choice is never conscious, or that the quality of a value proposition is irrelevant. To the contrary: People must have a reason to buy a product in the first place. And sometimes a new technology or a new regulation enables a company to radically lower a product’s price or to offer new features or a wholly new solution to a customer need in a way that demands consumers’ consideration. Robust where-to-play and how-to-win choices, therefore, are still essential to strategy. Without a value proposition superior to those of other companies that are attempting to appeal to the same customers, a company has nothing to build on. But if it is to extend that initial competitive advantage, the company must invest in turning its proposition into a habit rather than a choice. Hence we can formally define cumulative advantage as the layer that a company builds on its initial competitive advantage by making its product or service an ever more instinctively comfortable choice for the customer.

Companies that don’t build cumulative advantage are likely to be overtaken by competitors that succeed in doing so. A good example is Myspace, whose failure is often cited as proof that competitive advantage is inherently unsustainable. Our interpretation is somewhat different. Launched in August 2003, Myspace became America’s number one social networking site within two years, and in 2006 overtook Google to become the most visited site of any kind in the United States. Nevertheless, a mere two years later it was outstripped by Facebook, which demolished it competitively—to the extent that Myspace was sold in 2011 for $35 million, a fraction of the $580 million that News Corp had paid for it in 2005.
Why did Myspace fail? Our answer is that it didn’t even try to achieve cumulative advantage. To begin with, it allowed users to create web pages that expressed their own personal style, so individual pages looked very different to visitors. It also placed advertising in jarring ways—and included ads for indecent services, which riled regulators. When News Corp bought Myspace, it ramped up ad density, further cluttering the site. To entice more users, Myspace rolled out what Bloomberg Businessweek referred to as “a dizzying number of features: communication tools such as instant messaging, a classifieds program, a video player, a music player, a virtual karaoke machine, a self-serve advertising platform, profile-editing tools, security systems, privacy filters, Myspace book lists, and on and on." So instead of making its site an ever more comfortable and instinctive choice, Myspace kept its users off balance, wondering (if not subconsciously worrying) what was coming next. Compare that with Facebook. From day one, Facebook has been building cumulative advantage. Initially it had some attractive features that Myspace lacked, making it a good value proposition, but more important to its success has been the consistency of its look and feel. Users conform to its rigid standards, and Facebook conforms to nothing or no one else. When it made its now-famous extension from desktop to mobile, the company ensured that users’ mobile experience was highly consistent with their desktop experience. To be sure, Facebook has from time to time introduced design changes in order to better leverage its functionality, and it has endured severe criticism in consequence. But in the main, new service introductions don’t jeopardize comfort and familiarity, and the company has often made the changes optional in their initial stages. Even its name conjures up a familiar artifact, the college facebook, whereas Myspace gives the user no familiar reference at all. 
Bottom line: By building on familiarity, Facebook has used cumulative advantage to become the most addictive social networking site in the world. That makes its subsidiary Instagram’s decision to change its icon all the more baffling.

The Cumulative Advantage Imperatives
Myspace and Facebook nicely illustrate the twin realities that sustainable advantage is both possible and not assured. How, then, might the next Myspace enhance and extend its competitive edge by building a protective layer of cumulative advantage? Here are four basic rules to follow:

1. Become popular early.
This idea is far from new—it is implicit in many of the best and earliest works on strategy, and we can see it in the thinking of Bruce Henderson, the founder of Boston Consulting Group. Henderson’s particular focus was on the beneficial impact of cumulative output on costs—the now-famous experience curve, which suggests that as a company’s experience in making something increases, its cost management becomes more efficient. He argued that companies should price aggressively early on—“ahead of the experience curve,” in his parlance—and thus win sufficient market share to give the company lower costs, higher relative share, and higher profitability. The implication was clear: Early share advantage matters—a lot.

Marketers have long understood the importance of winning early. Launched specifically to serve the fast-growing automatic washing machine market, Tide is one of P&G’s most revered, successful, and profitable brands. When it was introduced, in 1946, it immediately had the heaviest advertising weight in the category. P&G also made sure that no washing machine was sold in America without a free box of Tide to get consumers’ habits started. Tide quickly won the early popularity contest and has never looked back. Free new-product samples to gain trial have always been a popular tactic with marketers.
Aggressive pricing, the tactic favored by Henderson, is similarly popular. Samsung has emerged as the market share leader in the smartphone industry worldwide by providing very affordable Android-based phones that carriers can offer free with service contracts. For internet businesses, free is the core tactic for establishing habits. Virtually all the large-scale internet success stories—eBay, Google, Twitter, Instagram, Uber, Airbnb—make their services free so that users will grow and deepen their habits; then providers or advertisers will be willing to pay for access to them.

2. Design for habit.
As we’ve seen, the best outcome is when choosing your offering becomes an automatic consumer response. So design for that—don’t leave the outcome entirely to chance. We’ve seen how Facebook profits from its attention to consistent, habit-forming design, which has made use of its platform go beyond what we think of as habit: Checking for updates has become a real compulsion for a billion people. Of course Facebook benefits from increasingly huge network effects. But the real advantage is that switching from Facebook also entails breaking a powerful addiction.

The smartphone pioneer BlackBerry is perhaps the best example of a company that consciously designed for addiction. Its founder, Mike Lazaridis, explicitly created the device to make the cycle of feeling a buzz in the holster, slipping out the BlackBerry, checking the message, and thumbing a response on the miniature keyboard as addictive as possible. He succeeded: The device earned the nickname CrackBerry. The habit was so strong that even after BlackBerry had been brought down by the move to app-based and touch-screen smartphones, a core group of BlackBerry customers—who had staunchly refused to adapt—successfully implored the company’s management to bring back a BlackBerry that resembled their previous-generation devices. It was given the comforting name Classic.
As Art Markman, a psychologist at the University of Texas, has pointed out to us, certain rules should be respected in designing for habit. To begin with, you must keep consistent those elements of the product design that can be seen from a distance, so that buyers can find your product quickly. Distinctive colors and shapes like Tide’s bright orange and the Doritos logo accomplish this.

The Science: How Habit Beats Novelty
Marketers spend time and money trying to make products stand out so that they’ll be chosen. But what if novelty is having the opposite effect? By Scott Berinato

Because people are creatures of habit, they are blind to novelty. Our brains use heuristics and experience to decide what something is, often skipping over unexpected or novel aspects of a scene. The neuroscientist Moshe Bar posits that the brain “is continuously busy generating predictions that approximate the relevant future.” He says, “We think that when we look at something, the brain asks, What is this? But really it asks, What is this like?” That is, we are matching input from the world to things we have encountered before. This rapid prediction process is the mental equivalent of the old game show Name That Tune. The more you’ve heard the song, the fewer notes it takes to recognize it. The less energy required to recognize something, the better. The goal of the marketer is to get the consumer to buy that brand in just one note. Constantly changing the melody and words won’t help.

The flip side of our blindness to novelty is that the more consistent an object remains, the less work the brain needs to do to identify (and choose) it. As long ago as 1910, researchers called this phenomenon the “warm glow of familiarity”; now there’s neurological evidence that it exists. Tide is a classic example of a product we recognize without much thought. Research has shown that we respond to placement on a store shelf, color, shape, and spatial orientation (in that order).
In a process called perceptual priming, the brain relies on those clues. Over time it needs less information and uses less power to recognize a familiar object than to recognize something new. Apparently this is a closely guarded secret, because marketers spend time and money creating novelty. But new packaging for an established product may not have the intended effect. Change meant to freshen or energize a product line may actually cause consumers to overlook the new design as they search for what they are in the habit of seeing. In a test of this change blindness, product managers were asked to locate a new design for their own brand on a shelf, and they couldn’t easily do it.

The Power of Implicit Memory
Once images have been embedded, the degree to which they stick with us is extraordinary. In one study, David Mitchell, of Kennesaw State University, showed his subjects a set of images multiple times, priming their implicit memory. Later he showed them fragments of the pictures they had originally seen, along with “novel fragments” of pictures they hadn’t. Subjects were far more likely to recognize the images they had seen before than the new ones. Here’s the kicker: Mitchell’s follow-up came 17 years after that priming. Some subjects didn’t even recall that they’d taken part in the study. Even years later, people can identify things they’ve encountered before more easily than things they haven’t—which should serve as a warning to marketers who value novelty over habit.

Scott Berinato is a senior editor at HBR.

You should also find ways to make products fit into people’s environments to encourage use. When P&G introduced Febreze, consumers liked the way it worked but did not use it often. Part of the problem, it turned out, was that the container was shaped like a glass-cleaner bottle, signaling that it should be kept under the sink.
The bottle was ultimately redesigned to be kept on a counter or in a more visible cabinet, and use after purchase increased. Unfortunately, the design changes that companies make all too often end up disrupting habits rather than strengthening them. Look for changes that will reinforce habits and encourage repurchase. The Amazon Dash Button provides an excellent example: By creating a simple way for people to reorder products they use often, Amazon helps them develop habits and locks them into a particular distribution channel.

3. Innovate inside the brand.
As we’ve already noted, companies engage in initiatives to “relaunch,” “repackage,” or “replatform” at some peril: Such efforts can require customers to break their habits. Of course companies have to keep their products up-to-date, but changes in technology or other features should ideally be introduced in a manner that allows the new version of a product or service to retain the cumulative advantage of the old. Even the most successful builders of cumulative advantage sometimes forget this rule. P&G, for example, which has increased Tide’s cumulative advantage over 70 years through huge changes, has had to learn some painful lessons along the way.

Arguably the first great detergent innovation after Tide’s launch was the development of liquid detergents. P&G’s first response was to launch a new brand, called Era, in 1975. With no cumulative advantage behind it, Era failed to become a major brand despite consumers’ increasing substitution of liquid for powdered detergent. Recognizing that as the number one brand in the category, Tide had a strong connection with consumers and a powerful cumulative advantage, P&G decided to launch Liquid Tide in 1984, in familiar packaging and with consistent branding. It went on to become the dominant liquid detergent despite its late entry. After that experience, P&G was careful to ensure that further innovations were consistent with the Tide brand.
When its scientists figured out how to incorporate bleach into detergent, the product was called Tide Plus Bleach. The breakthrough cold-cleaning technology appeared in Tide Coldwater, and the revolutionary three-in-one pod form was launched as Tide Pods. The branding could not have been simpler or clearer: This is your beloved Tide, with bleach added, for cold water, in pod form. These comfort- and familiarity-laden innovations reinforced rather than diminished the brand’s cumulative advantage. The new products all preserved the look of Tide’s traditional packaging—the brilliant orange and the bull’s-eye logo. The few times in Tide’s history when that look was altered—such as with blue packaging for the Tide Coldwater launch—the effect on consumers was significantly negative, and the change was quickly reversed.

Of course, sometimes change is absolutely necessary to maintain relevance and advantage. In such situations smart companies succeed by helping customers transition from the old habit to the new one. Netflix began as a service that delivered DVDs to customers by mail. It would be out of business today if it had attempted to maximize continuity by refusing to change. Instead, it has successfully transformed itself into a video streaming service. Although the new Netflix markets a completely different platform for digital entertainment, involving a new set of activities, it found ways to help its customers by accentuating what did not have to change. It has the same look and feel and is still a subscription service that gives people access to the latest entertainment without leaving their homes. Thus its customers can deal with the necessary aspects of change while maintaining as much of the habit as possible. For customers, “improved” is much more comfortable and less scary than “new,” however awesome “new” sounds to brand managers and advertising agencies.

4. Keep communication simple.
One of the fathers of behavioral science, Daniel Kahneman, characterized subconscious, habit-driven decision making as “thinking fast” and conscious decision making as “thinking slow.” Marketers and advertisers often seem to live in thinking-slow mode. They are rewarded with industry kudos for the cleverness with which they weave together and highlight the multiple benefits of a new product or service. True, ads that are clever and memorable sometimes move customers to change their habits. The slow-thinking conscious mind, if it decides to pay attention, may well say, “Wow, that is impressive. I can’t wait!” But if viewers aren’t paying attention (as in the vast majority of cases), an artful communication may backfire.

Consider the ad that came out a couple of years ago for the Samsung Galaxy S5. It began by showing successive vignettes of generic-looking smartphones failing to (a) demonstrate water resistance; (b) protect against a young child’s accidentally sending an embarrassing message; and (c) enable an easy change of battery. It then triumphantly pointed out that the Samsung S5, which looked pretty much like the three previous phones, overcame all these flaws. Conscious, slow-thinking viewers, if they watched the whole ad, may have been persuaded that the S5 was different from and superior to other phones. But an arguably greater likelihood was that fast-thinking viewers would subconsciously associate the S5 with the three shortcomings. When making a purchase decision, they might be swayed by a subconscious plea: “Don’t buy the one with the water-resistance, rogue-message, and battery-change problems.” In fact, the ad might even induce them to buy a competitor’s product—such as the iPhone 7—whose message about water resistance is simpler to take in. Remember: The mind is lazy. It doesn’t want to ramp up attention to absorb a message with a high level of complexity.
Simply showing the water resistance of the Samsung S5—or better yet, showing a customer buying an S5 and being told by the sales rep that it was fully water-resistant—would have been much more powerful. The latter would tell fast thinkers what you wanted them to do: go to a store and buy the Samsung S5. Of course, neither of those ads would be likely to win any awards from marketers focused on the cleverness of advertising copy.

Must Reads
Experts have been debating the nature of competitive advantage for years. Below are four standout articles from HBR that articulate the most influential thinking on the subject.

“What Is Strategy?” by Michael E. Porter. In this classic 1996 article, Porter argues that operational effectiveness, although necessary to superior performance, is not sufficient, because its techniques are easy to imitate. The essence of strategy is choosing a unique and valuable position rooted in activities that are much more difficult to match.

“The One Number You Need to Grow” by Frederick F. Reichheld. This 2003 article introduced the Net Promoter Score—a simple measure of a customer’s willingness to recommend a product. NPS is a reliable index to loyalty, says Reichheld, and the best predictor of top-line growth.

“Transient Advantage” by Rita Gunther McGrath. McGrath contends that business leaders are overly fixated on creating a sustainable competitive advantage. Business today is too turbulent to spend months crafting a long-term strategy, she says in this 2013 article. Rather, leaders need a portfolio of transient advantages that can be built quickly and abandoned just as rapidly.

“When Marketing Is Strategy” by Niraj Dawar. For decades, businesses have sought competitive advantage in upstream activities related to making new products—bigger factories, cheaper raw materials, efficiency, and so on. But those are all easily copied. Advantage, says Dawar in this 2013 article, increasingly lies in the marketplace.
The important question is not “What else can we make?” but “What else can we do for our customers?”

Conclusion
The death of sustainable competitive advantage has been greatly exaggerated. Competitive advantage is as sustainable as it has always been. What is different today is that in a world of infinite communication and innovation, many strategists seem convinced that sustainability can be delivered only by constantly making a company’s value proposition the conscious consumer’s rational or emotional first choice. They have forgotten, or they never understood, the dominance of the subconscious mind in decision making. For fast thinkers, products and services that are easy to access and that reinforce comfortable buying habits will over time trump innovative but unfamiliar alternatives that may be harder to find and require forming new habits. So beware of falling into the trap of constantly updating your value proposition and branding. And any company, whether it is a large established player, a niche player, or a new entrant, can sustain the initial advantage provided by a superior value proposition by understanding and following the four rules of cumulative advantage.

A.G. Lafley, the recently retired CEO of Procter & Gamble, serves on the board of Snap Inc. Roger L. Martin is a professor at and the former dean of the Rotman School of Management at the University of Toronto. He is a coauthor of Playing to Win (Harvard Business Review Press, 2013).

Counterpoint: Old Habits Die Hard, but They Do Die
By Rita Gunther McGrath

I love the notion that customers’ purchase decisions are more closely related to habit and ease than to loyalty—it brings much-needed insight from behavioral science to the study of consumer decisions. And, as Lafley and Martin suggest, it has major implications for how products are developed and brands are managed.
I completely agree with the authors that customers’ unconscious minds dominate their decision-making process—and I suspect that any company can benefit from making its customers’ routine choices easier, faster, and more convenient. That’s one reason the subscription model has become so popular in so many industries—it eliminates the need for customers to consciously decide about routine purchases and offers providers the lure of effortlessly recurring revenue. The theory of cumulative advantage makes a lot of sense in what Martin Reeves and his colleagues at BCG call a classical strategic setting—one in which industry boundaries are clearly delineated, the basis of competition is stable, the environment experiences no major disruptions, and a strong competitive position, once created, can be sustained. As BCG has shown, the candy company Mars has enjoyed very long product life cycles: Snickers and M&M’s (introduced in 1930 and 1941, respectively) are among the best-selling candies in the world today. Procter & Gamble has a similarly strong track record with Tide, Unilever with Dove, and PepsiCo with Tropicana orange juice.

It Works Until It Doesn’t
The changing nature of competitive advantage

Any theory that seeks to explain cause-and-effect relationships operates within a set of constraints. A theory that works beautifully under one set may fall apart under another. Over the years, we have seen systematic shifts in how companies create a strategically valuable position, often reinforced by the constraints of the systems within which they operate. In the early 1900s, for instance, companies that achieved economies of scope and scale through mass production were dominant, and they remained so right through the period after World War II. Indeed, the Fortune 500 list of 1970 reveals the dominance of huge U.S.-based industrial players such as General Motors, General Electric, Exxon Mobil, and Union Carbide.
With the advent of communications and computational technology, strategic advantage began to shift toward companies that leveraged information technology to provide services in addition to goods, and toward models that placed a value on information utilization in addition to product features and functions. Although the industrial giants remained in place for a long time, companies such as Walmart, AIG, Enron, and Citigroup had joined them on the Fortune 500 list by 1995. Today the dynamics of competitive advantage have shifted once more. Companies are achieving advantage through access to assets rather than ownership of them. In addition, a whole new category of “platform” companies, such as Google, Apple, and Facebook, has emerged, and the very size of their customer base creates a reinforcing virtuous cycle. Often called network effects, these dynamics mean that the more customers a company has, the more valuable it is to each additional customer. In such cases being an early mover can result in a formidable advantage. The point is that every theory has its constraints. Attempting to apply a theory outside its constraints can lead to disaster.

But for a growing number of companies, those conditions don’t apply. Their industry boundaries aren’t clearly delineated—in fact, they’re totally blurry. Just ask anyone in retail, entertainment, or telecommunications. Their environments aren’t stable—companies can be disrupted by entrants from below, as Clayton Christensen has pointed out, but also by competitors using a different business model or moving over from an adjacent industry. And long-standing competitive strengths can be upended almost overnight by someone who has digitized your physical business (hello, Encyclopaedia Britannica) or turned your product into a service (see Zipcar, Airbnb, and Uber).
Apple and Google didn’t necessarily intend to disrupt point-and-shoot cameras, stand-alone GPS devices, TV advertising, or the Weather Channel, but they did so nonetheless.

Strategic Inflection Points

For some time my argument has been that we need a new way of thinking about strategy in environments where traditional barriers to entry are eroding, or in which emerging technologies weaken constraints. Andy Grove’s phrase inflection point captures this situation nicely. A strategic inflection point, he says, is “a time in the life of a business when its fundamentals are about to change.” Inflection points are difficult for traditional strategy tools to address, because they usually don’t look important at first. The Wright brothers proved it was possible to fly safely in 1903. Nobody took that seriously until 1908. Even with the 1914 launch of the first commercial flight, few realized that airplanes would upend industries as varied as railroads, steamships, and package delivery. Consumer habits can be powerful aids to sustaining a competitive advantage, as Lafley and Martin quite correctly point out. But habits, like other elements of the environment, can change. And when new technologies make new business models viable, habits can change very fast. Consider the powerful forces that were unleashed from 2004 to 2007 by four separate but linked business developments. In 2004 Facebook was founded. In 2005 YouTube was founded. In 2006 Amazon launched Amazon Web Services (AWS). In 2007 Apple’s iPhone and Google’s Android operating system were commercially released. As the technology analyst Ben Thompson points out, AWS made it easy and cheap to start an online company, YouTube made it easy and cheap to upload videos, and Facebook offered a ready-made channel for sharing such videos. I’d add that the wild popularity of mobile phones made all that available to ordinary people.
Now a couple of guys with an idea and access to programming skills can rival global giants in days or weeks, not months or years—with practically no assets.

Gillette Versus Dollar Shave

And that’s exactly what happened with the 2012 launch of DollarShaveClub.com. The brand promise was simple: great razors with few frills, for a low subscription price, delivered to your door automatically. Not only did you save money, but you didn’t have to visit a store or risk running out. This was all the more attractive because habitual buying behavior had already been disrupted: Razor blades are expensive and easy to steal, so it has become common for them to be kept under lock and key in stores. Today, although Dollar Shave Club has an 8% share of the $3 billion U.S. market for blades and razors, the far more important number is its “share of cartridge.” That, according to recent sources, is an astonishing 15% of all cartridges sold. In 2010 Gillette had 70% of the global shaving market and legions of loyal customers who reliably traded up as the next generation of products, with higher prices, was released. Procter & Gamble had acquired the brand in 2005 for a reported $57 billion. It was a classic high-market-share, high-quality business—and we can only assume from their track records that both Gillette and P&G were extremely good at getting customers to buy habitually. Clearly they had a strong cumulative advantage. But that wasn’t enough, because the business had hit an inflection point. In July 2016 Unilever agreed to buy Dollar Shave Club for about $1 billion in cash. The founding entrepreneurs are happy. Their investors are happy. Their customers are clearly happy. The incumbents? Not so much. According to the Wall Street Journal, P&G’s share of men’s razors and blades had fallen to 59% in 2015. One of its responses was to launch the Gillette Shave Club.
Having seen the potentially habit-destroying effects of the subscription model, P&G now offers subscription and delivery for other products—including expensive Tide Pods. Twenty years ago it would have been inconceivable that a marketing message could reach 20 million people in a matter of weeks without massive spending on television and other advertising. But Dollar Shave Club accomplished that with an entertaining launch video, promotion on social media channels, and a group of enthusiastic brand ambassadors who provided feet on the ground to promote its products—free.

Leveraging the Familiar Even as You Reinvent

The point of this story is that even a company as storied as P&G can be taken by surprise. Which brings me to the tricky question, How can executives balance the formidable power of cumulative advantage and habit, often associated with a brand, with the need to refresh their approach? One practical tactic is to leverage the core skills or capabilities of an organization in a new format. Target offers an illustrative case. The company’s roots were in a traditional department store, Dayton’s, which became Dayton Hudson and eventually Marshall Field’s. In 1960 its leadership saw an opportunity to reach a market segment that appeared to be growing but wasn’t well served by the existing format. That segment consisted of value-conscious consumers who nonetheless appreciated good design and a reasonably pleasant shopping experience. To protect the then-dominant department store brand, the new venture was branded separately. Its iconic bull’s-eye logo was meant to represent the notion of hitting the target of convenience, price, and customer experience. By the mid-1970s Target stores were outselling the company’s department stores. In 2000 Dayton Hudson changed its name to Target to reflect the reality of its now-core business.
In 2004 the company sold its department store brands, completing an extraordinary retail transformation. Another fascinating transformation that leveraged the core skills of a parent company is the relentless digitization pursued by the newspaper publisher Schibsted, of Norway. Unlike many other newspaper publishers, Schibsted saw the encroachment of digital classified advertisements as an opportunity rather than a threat to its business. Beginning in the late 1990s, its leaders aggressively courted classified advertisers to list with its digital properties. This became a crusade. As Sverre Munck observed when he was the EVP for strategy and international editorial, “The Internet was made for classifieds and classifieds were made for the Internet.” Long a traditional media company, Schibsted was able to leverage deep ties with its advertisers through a model that permitted economies of scale in editorial and communication activities across its media brands. These were supplemented by a significant commitment to bringing technological capabilities into the very core of the media business, ending the tug-of-war between conventional editorial processes and the logic of digital transformation.

A Balance of Stability and Dynamism

In 2012 I wrote an HBR piece titled “How the Growth Outliers Do It.” That analysis, which looked at 10 years of net income data from 2000 to 2009, found that of the 2,347 publicly traded firms with a market capitalization of more than $1 billion, only 10 had successfully grown net income by 5% or more in every one of those 10 years. (Although performance can be measured in many ways, this seems to me to be one that tests the idea of sustainable advantage consistently.) The first conclusion is obvious: Steady, sustained profit growth is hard to achieve, particularly in a period that includes the Great Recession of 2008. The second, however, is that some companies do manage to achieve it for relatively long periods of time.
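The outlier screen described above is mechanical enough to state precisely: a firm qualifies only if its net income grew by at least 5% in every one of ten consecutive year-over-year steps. A minimal sketch of that filter follows; the function name and the two company series are invented for illustration and are not drawn from the study's data.

```python
def is_outlier(net_income_by_year, min_growth=0.05):
    """True only if net income grew by at least min_growth in every year-over-year step."""
    steps = zip(net_income_by_year, net_income_by_year[1:])
    return all(prev > 0 and (curr - prev) / prev >= min_growth for prev, curr in steps)

# Eleven year-end figures (a base year plus 2000-2009, say) give ten growth steps.
# Both firms and their numbers are hypothetical.
firms = {
    "SteadyCo": [100, 106, 113, 120, 128, 136, 145, 155, 165, 176, 188],  # ~6% every year
    "ChoppyCo": [100, 110, 104, 130, 125, 140, 150, 160, 170, 180, 190],  # two down years
}

outliers = [name for name, series in firms.items() if is_outlier(series)]
```

In the study itself a market-cap threshold (over $1 billion) was applied first; the point of the sketch is simply how strict the requirement of ten consecutive qualifying years is, since a single flat or down year disqualifies a firm.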
I found that those companies balanced elements of stability (culture, relationships, leadership, and even strategy) with elements of dynamism (rapid resource mobilization, marketplace experiments, and people mobility). I spoke recently with Malcolm Frank, a senior executive at Cognizant, which appears on both my original list and one that I’ve updated through the end of 2015 (for which I used modified criteria: a company was included if it was over the threshold in any of the previous 10 years, and the list totaled roughly 5,300). Frank told me that his organization lives and breathes the idea that in many cases competitive advantage is not going to last. “For us, what was the ceiling five years ago is going to be the floor five years from now,” he said. Cognizant is also disciplined about exiting slow-growth or underperforming operations. But it is remarkably stable. Francisco D’Souza has been CEO since 2007, and the most recent addition to the leadership team joined in 2005. Cognizant’s culture, too, reflects what its leaders call a “well-established set of cultural values,” as demonstrated in their written documents, public statements, and go-to-market strategies.

CONCLUSION

But let’s return to the really important insight that underlies the argument of Lafley and Martin: Most of the time, we are all unaware of the true motivations behind the choices we make. The better strategists and marketers become at understanding those motivations, the more likely they are to succeed at building habitual behavior among consumers—and, just as important, the more likely they are to see how those habits might change. Clayton Christensen’s “jobs to be done” theory may come in handy here. He has famously said that when we buy products, we are actually hiring them to do a job for us. And the “jobs” underlying most product purchases are remarkably stable.
Take communication: From smoke signals to the Pony Express to the telegraph to the telephone to the communications technologies of today, our basic job—to send messages to other human beings—has not changed. But how that job gets done has changed dramatically. If incumbent companies stay focused on the job itself—rather than on the specifics of how it gets done at this moment in time—they may be able to invent a better way before the competition does. This is a point that company leaders often miss. Customers can easily “hire” another solution that does a given job better—just as vast numbers of them are currently doing with razors bought by subscription.

Rita Gunther McGrath, a professor of management at Columbia Business School, is a globally recognized expert on strategy, innovation, and growth with an emphasis on corporate entrepreneurship.

In Practice

“Habit is how we build the connection.”
A conversation with Jørgen Vig Knudstorp, cochairman of the LEGO Brand Group
Interview by David Champion
Photography by Lasse Bech Martinussen

HBR: Do you think people’s loyalty to LEGO is a function of habit?

Knudstorp: I think it’s more than habit. When I became CEO, 13 years ago, and the LEGO Group was in crisis, people wrote letters to me and said, “Please don’t die. The world would be a poorer place without LEGO.” When customers have an emotional connection with your brand, they’ll make an effort to get it, which I think means that they’re making a conscious choice. People don’t line up for days to get the iPhone 7 because it’s the automatic choice. Of course, not all products can hope to create an emotional connection with the consumer. Airlines and hotels have loyalty programs to force us to be loyal because we don’t feel an emotional connection that would make us choose them. It’s hard to see many people thinking of themselves as “Holiday Inn people.”
And if you’re marketing a practical product like Tide, you absolutely need to appeal to the unconscious mind, because the choice of what detergent to buy is usually made unconsciously. But a product like ours, which is about play and children and learning, can be more than just a safe or easy option. It can be a conscious statement of values or identity. Of course, children don’t go around thinking they’re LEGO people—they just want to play—but they can grow up to become LEGO people and have their own children. It’s a lot easier to persuade parents who got the LEGO habit themselves as kids, which is one reason we have to work harder in emerging markets, where we’re mostly talking to first-generation users.

Does habit have a role in this emotional connection?

Absolutely. Habit is how we build the connection. People develop habits by doing the same things repeatedly. But the habits eventually turn into values. If we teach our children to brush their teeth every night before they go to bed, in the beginning it’s just an action we force upon them. Then over time they feel uncomfortable going to bed without having brushed their teeth. Eventually they start to feel that brushing your teeth is the right thing to do. If you can make your brand a value—a part of someone’s identity—you have a really powerful competitive advantage. But it all begins with making your brand a habit.

How does that work at the LEGO Group?

You need to give people some simple routines they can practice to get habituated to building with the bricks. As the things they build become more complicated, they start developing their own habits and techniques. But they’ll be open to changing those routines if someone shows them a neater, simpler way. There are hundreds of LEGO conventions every year, all around the world, showcasing new ways of using existing elements that we as a company have never thought about. Go on YouTube, and you’ll see the amazing things people build.
I’m sitting here with a structure of about 20 bricks, and I or anyone else could use the same 20 bricks to create millions of different structures. Building is a really fundamental need—we all want to make things that are our own—and we offer a platform for that.

What do you have to do to get the process started?

It’s all about keeping the basics simple and easy to get used to. A neuroscientist recently told me that our brains have the equivalent of 20 megabytes of RAM—enough to process just four photos on a smartphone. And it gets worse the older you get: Recent research out of NASA suggests that when people reach the age of 25, they’ve retained only about 5% of the capacity for creative thinking that they had when they were five. This means they’re easily overloaded. Let me illustrate how we manage that. We have a product line called LEGO Creator 3 in 1—kits that each offer instructions for building three models with the one kit. In fact you can make many more models with a single kit. Twelve years ago we would have offered 12 suggestions. But so much variety actually put people off. So we simplified it to just three. You also have to make the routines fun, because play is how kids learn and get habituated to new routines.

Novelty—having the next cool thing—is important in the toy business. But habit is about predictability. How do you balance the two?

You’re absolutely right that novelty is important for children, which is why 60% of the LEGO sets that we put on the market in any given year are new. But that’s not a trade-off with familiarity, because the LEGO System of Play is a platform—perhaps the only toy in the world that offers a platform for play. Each new set not only brings 12 or so new possible models but also can be combined with what you already have to lift your total number of potential structures by orders of magnitude. This has a terrific network effect.
If a child already has a LEGO set, getting another LEGO product is more valuable than just the incremental purchase—it extends his or her platform for play in an exponential way. And it’s infectious: The more kids who get the LEGO habit, the more others want to as well, and they can mix and match their LEGO collections.

David Champion is a senior editor at HBR.

In Practice

“A product that lets people hold on to their habits”
An interview with Intuit chairman and cofounder Scott Cook
Interview by David Champion
Photography by Jeff Singer

HBR: What role have consumer habits played in your company’s success?

Cook: We paid really close attention to how people actually went about managing their personal finances before we created our first product—and we got the user interface to mimic those routines. Quicken was designed to look like a checkbook. And it wasn’t just about appearance—the interface operated like a check register. You put the next transaction at the bottom, for example, just as you do with a check register. Nobody else offered such a familiar interface. We followed people’s routines as we built out the product’s functionality. Back in 1984 people paid bills by check. So we made sure that Quicken could print checks on a printer very easily. That sounds straightforward now, but at the time, people were using the old continuous Epson inkjet printers, which made aligning check stock to print on the lines of the checks hugely difficult. We actually invented and patented an alignment technique to make printers do it correctly. No one else seems to have thought of doing that.

What made you bet so heavily on consumer routines?

It was Apple. I asked an employee there to show me a Lisa, and I saw that its desktop interface looked very much like the physical artifacts office people worked with—files and so forth.
I remember leaving the Apple headquarters and driving to the closest restaurant just so I could sit down and take notes about what I’d observed that was so powerful in the design. Later, at the launch of the Mac, I was struck by something Steve Jobs said about making computers as easy to use as a telephone. Think about that a minute. How easy was a phone to use back then? You had to memorize seven- or 10-digit numbers. If you dialed the wrong number, the system charged you, which was very expensive for long distance, and then you had to start from scratch. If the line was busy, you got an unpleasant beeping sound. Phones had a really awful interface. So why did Jobs—a byword for elegant design—say he wanted to make Macs as easy as the telephone?

Because everyone was used to it.

Correct. Because of habit. People were used to typing these strings of seven or 10 digits. Now, whose habits do you pick? When we launched QuickBooks, for small businesses, accounting software was universally designed with accountants in mind, and users had to speak their language. But nine out of 10 small businesses had no accountant on staff—their books were kept by a civilian, who probably couldn’t tell a debit from a credit and didn’t want to learn. We discovered this by observing Quicken customers and decided to design the first accounting product with no accounting in it; we called it QuickBooks. Despite a flawed launch, we were the market leader in two months, because we had designed it to work with people’s habits. Now that a lot of small businesses use QuickBooks, their auditors have gotten used to it as well and are advising new clients to use it, producing a snowball effect. That’s all built on a product that lets people hold on to their habits.

But don’t you find that habits change quickly in the digital space?

Yes. People will adjust to a radical change if it offers a habit from some other context to relate to—particularly in the digital space.
We’re now trying to make many habits—which are really just a way of simplifying your life—redundant. Take a typical freelancer, such as an Uber driver. Freelancers need to keep track of their expenses so that they can offset them against taxes. But it’s a pain to keep and categorize all your receipts and to list odometer readings and mileage for every business drive. Most freelancers don’t, and they lose thousands in tax deductions. So we’ve launched a service called QuickBooks Self-Employed that takes credit card and bank feeds and automatically categorizes expenses according to the merchant codes. It also automatically lists every time you drive, so you just swipe the business drives. Then, at tax time, all the information flows right to your tax preparer or tax software.

If you’re making the habits that got you started redundant, where will your next advantage come from?

It’s about network effects now, which is why we’re working on leveraging communities of users. With TurboTax, for example, we’re getting customers to answer people’s tax questions. We’ve created the largest and best source of answers on taxes—if you go to Google and put in a tax question, the link at the top will often be our answer. This is tapping a newer habit from the digital age: participating in online communities. But we couldn’t have created the community if we hadn’t worked with people’s routines in the first place.

David Champion is a senior editor at Harvard Business Review.

January–February 2017 Issue


Why brands are abandoning mass-produced products - Marketing Week

Why brands are abandoning mass-produced products

New manufacturing technology such as 3D printing means brands are swapping mass production for customisation.

By Jonathan Bacon, 7 Oct 2015, 4:32 pm

“Designed by Apple in California. Assembled in China.” This is the statement on the back of every iPhone that neatly sums up how most major western brands approach manufacturing. Huge corporations retain their western sensibilities by basing their global design and marketing functions in their domestic markets, but keep costs low and reach emerging markets by outsourcing their supply chain to the Far East. Yet as China’s economy flounders, consumer attitudes shift and manufacturing technology advances, will growing market demand for more customised and locally made products lead to a move away from offshore mass production? Global markets are still readjusting to the dramatic crash in Chinese stocks over the summer, which culminated in an 8.5% drop in the value of the Shanghai main share index on 24 August, partly driven by a steep decline in its manufacturing sector, which serves brands around the world. In September, China’s factory activity fell at the fastest rate for six and a half years. The downturn could also have far-reaching consequences for how brands organise their global operations – and for the kinds of products reaching end consumers as a result.

Brand risks from China

There are already signs that certain brands are moving away from China to meet a rising demand from their home markets for locally made products. Last month, Reuters reported “a resurgence in British textile manufacturing” as output grew in the first half of this year. It noted that a series of clothes makers have opted to make Britain their production home, including Albion Knitting Company, a supplier to luxury brands such as LVMH and Gucci, which this year returned to London after 18 years in China.
Other brands may consider abandoning Chinese production lines as the current market volatility reduces the prospect of sales growth from Chinese consumers. The devaluation of China’s currency in August, for example, resulted in a sharp fall in the share value of luxury British brand Burberry. The company made 37% of its revenues in the Asia-Pacific region last year, and much of that was in China, so the brand is heavily exposed to Chinese market conditions. BMW, meanwhile, experienced its first sales drop in China for a decade during the second quarter of this year. The company confirmed that the 4% year-on-year decline in May, following years of double-digit growth in China, had dragged down its overall performance. “If conditions on the Chinese market become more challenging, we cannot rule out a possible effect on the BMW Group’s outlook,” it said in a statement.

3D printing comes of age

In addition to these economic challenges, emerging technologies such as 3D printing are enabling companies to make complex products in their domestic markets – thus reducing their reliance on Chinese factories. Although businesses have so far failed to adopt 3D printing on a large scale, the technology, which allows single products and parts to be manufactured quickly from materials including plastics and metals, is beginning to gain traction with mass market brands. Earlier this year, 3D design company Autodesk agreed a partnership with Mattel that will see the toy maker launch a series of apps enabling consumers to design and customise their own 3D-printed toys. Mattel has not yet set a date for when it will roll out the service, but the deal reflects a wider strategy by the company to move away from mass production in favour of personalised, direct-to-consumer services.
“Technology is changing daily and by harnessing Mattel’s expertise in play and Autodesk’s expertise with creative apps and 3D printing, we’re able to offer a new kind of 3D design experience,” claims Mattel senior vice-president Doug Wadleigh. During a presentation at the Dreamforce conference in San Francisco last month, Autodesk claimed that several factors are combining to gradually alter the way that businesses manage their global product development needs. These factors include the recently developed ability to 3D-print metal structures and the rise of crowdfunding as a way of quickly getting new products off the ground. Autodesk also highlighted the growth of hi-tech microfactories: small facilities that use the latest robotic and manufacturing technology to produce customised products at speed.

The rise of microfactories

The move towards customisation is allowing companies to relocate certain elements of their production processes near to their domestic base. The result is that brands are able not only to design and manufacture goods tailored to the needs of the local market, but also to test new products with their target customers without committing to a large, expensive production run. Earlier this year, aerospace group Rolls-Royce unveiled a prototype metal engine part to mark the opening of a new £15m manufacturing unit at the UK government-backed Manufacturing Technology Centre (MTC) in Coventry. The MTC, which opened in 2011, provides a collaborative environment for companies and academia to develop new manufacturing solutions by experimenting with the latest technology. In addition to Rolls-Royce, the centre counts Airbus, Nikon and Unilever among its 85 member companies.
Similarly, in the US, technology and healthcare conglomerate General Electric (GE) last year launched FirstBuild: a co-creation community for designing “the next generation of major home appliances”. “The challenge [for GE] is breakthrough innovation,” says Wayne Davis, innovation and marketing leader at GE Appliances. “We know how to make washers and dryers, car companies know how to make cars – but if it’s something that’s truly unique, making a decision to invest millions of dollars and make hundreds of thousands of something is tough.

“The idea here is to get ideas from everywhere, build it at a smaller scale and test it in the market. That makes it a lot easier to scale up.”

The FirstBuild community is a joint venture between GE and Local Motors, a vehicle maker specialising in micro-manufacturing. It includes an online portal where designers can come together to share and discuss ideas for new products as well as a microfactory based in Louisville, Kentucky, where engineers build the products using the latest tech, including 3D printing (see case study, bottom). This setup allows GE to experiment with its product development – particularly in niche areas such as connected appliances – thus removing the danger that it will invest heavily in mass producing an unpopular or unsuitable product. FirstBuild has so far created and released nine products for sale to the general public, including a smart cooktop and smart refrigerator. The online community has 8,000 registered members, with those who contribute original ideas receiving royalties if their products make it to market.

Supply chain conundrum

As the name suggests, a microfactory is limited by its production capacity. FirstBuild’s facility, for example, only manufactures up to 1,000 items during any single product run. If the product is selected for the mass market, it is moved to GE Appliances’ regular factories, which are located in the US.
This is a source of pride for GE and is exploited by the company’s marketing, which claims it is “revitalising manufacturing in the United States" by keeping its production at home. However, for most major brands China is still the go-to place for their everyday manufacturing needs. Although 3D printing is starting to play a bigger role in the product development phase, the technology is limited to making a relatively small range of finished parts and is primarily used for prototypes rather than final products. Peter Williamson, professor at the University of Cambridge’s Judge Business School, believes that the idea of global companies “reshoring" production from China has been hugely overblown. He notes that many global brands are reliant on China for their entire supply chain given the lack of equivalent infrastructure in many western countries. “Very little of Chinese manufacturing is pure assembly — it is a whole swathe of supply chain activities," he says. “The oft-quoted study of the iPhone having minimal value added [in China] is now grossly out of date; a whole range of the core components including microphone, antennae and screen are made in China." Williamson accepts that production costs are rising in China – partly as a result of a government policy, beginning in 2013, to increase the minimum wage by 84% over five years. But he also notes that workforce productivity is surging and that despite its recent stock market troubles, China remains the world’s second largest economy and the largest market for many global products. “Why would anyone reshore production back to the west to re-export [the products] back to China?" he asks. “There will, of course, be limited reshoring of manufacturing that is extremely time-sensitive or customised, using technologies such as 3D printing.
“Some luxury products and some foods that are basically marketed on their foreign provenance will also be made in Europe. But I believe these will be the exceptions – the large volume manufacturing will stay in China because that is where the scale, the supply chain and the market is."

The home-made appeal

However, if China is still the dominant centre for large-scale manufacturing, it is not necessarily an attractive destination for small companies or startups seeking to retain control of their manufacturing processes. Dublin Design Studio, a new company set up last year in the city’s Docklands Innovations Park, has opted to keep all of its manufacturing in Ireland as it prepares to launch its first product, Scriba – a stylus for Apple mobile devices. The product was partly funded via the crowdfunding website Kickstarter, with further support from New Frontiers, an Irish government-backed business incubation programme. Chief executive David Craig explains that he decided to keep production in Ireland – on a limited run to begin with – in order to refine the product during the early stages of the launch. “Although we might have saved some money going straight to the Far East, being able to have control and have people on the end of a phone who can help us fix things – rather than having to jump on a plane to resolve a problem – will make it much smoother for us," he says. For American bicycle, watch and leather goods maker Shinola, meanwhile, the manufacturing base is inextricably tied to its brand values and purpose. The company was set up in Detroit in 2011 and runs both its head office and its manufacturing facility from the city’s College for Creative Studies. The brand aims to challenge the notion of Detroit as an unviable economic centre – an image created by the 2008 global financial crash and the woes of the American car industry based in the city – by bringing back manufacturing and highly skilled jobs.
The company also fosters a collaborative approach to manufacturing through a partnership with Ronda, an established Swiss watchmaker that helps to train Shinola’s Detroit-based staff. Basing all of its operations within the college enables Shinola to develop and roll out new products at high speed, says CMO Bridget Russo. The company is planning to expand into new product areas next year, including audio equipment, and also intends to move into a larger London store after opening its first shop in the UK in 2014. “As far as we know, we’re the only instance where we have both manufacturing and the headquarters of a brand all based within a design school," she says. “Because design and prototyping all happen here, our designers can literally come in with an idea in the morning and have a prototype of that product at the end of the day." While Russo notes that some people will simply buy Shinola products because they like them, she believes that many others are drawn to the brand because of its approach to manufacturing and its Detroit heritage. This was crucial to building the brand’s story and attracting early adopters, she explains, and will remain central to its marketing messages on both online and offline channels in the future. “Even if consumers don’t have a connection to Detroit, it makes a difference knowing that the products have a strong story behind them and that by buying a Shinola product they’re helping to create jobs in Detroit," she says. “But it’s also a beautiful product, so all those things factor into the desirability of the brand."

Case study: GE FirstBuild

General Electric’s (GE) co-creation community FirstBuild, which launched last year, is based on the idea that crowdsourcing is the best route to “breakthrough innovation". The online presence for the community, FirstBuild.com, works like a social network where design enthusiasts can share, discuss and rate each other’s ideas for new home appliances.
This includes uploading sketches and detailed plans for their designs. Ideas that gain the most traction on the site are taken by GE, in partnership with micro-manufacturing specialist Local Motors, and turned into products in a hi-tech microfactory based in Louisville, Kentucky. Anyone with an idea can sign up to the community, which so far has 8,000 registered members. Contributors are also invited to get involved in the production process at the microfactory, which incorporates the latest 3D printing, laser and robotic technology. “We realised that there are people out there with great ideas beyond advanced manufacturing engineers," says Wayne Davis, innovation and marketing leader at GE Appliances. “We have home enthusiasts, tinkerers, hackers and regular consumers who have great ideas about what they would want in their kitchen or their laundry room, and we’re giving them a platform to submit those ideas." GE has so far created nine home appliances for sale to the general public via FirstBuild. These are being sold online, as well as through GE Appliances’ own retail channels, with those who contribute the idea receiving a royalty payment. FirstBuild has also generated interest in its manufacturing projects via the crowdfunding website Indiegogo and by putting on ‘hackathon’ events for the local tech community. Davis claims that numerous large manufacturing companies have taken an interest in the FirstBuild model as they look to find their own ways of designing and making innovative products at greater speed. “With the major manufacturers, we probably give a tour per week to somebody," he says. “This is unique in the manufacturing world."


Enrico Moretti: The Geography of Jobs

Why are some places more prosperous than others? A scholar explores the “brain hubs" phenomenon. Americans frequently debate why wages are growing for the college-educated but declining for those with less education. What is less well-known is that communities and local labor markets are also diverging economically at an accelerating rate. A closer look at the 300-plus metropolitan areas of the United States shows that Americans with high school degrees who work in communities dominated by innovative industries actually make more, on average, than the college graduates working in communities dominated by manufacturing industries, according to research by University of California, Berkeley economist Enrico Moretti, the author of The New Geography of Jobs, a book that Forbes magazine called "easily the most important read of 2012." In the San Jose metropolitan area, for example, a high school graduate averages $68,009, compared with the $65,411 that is average for a college graduate in Bakersfield, Calif. Some places have always been more prosperous than others, but these differences have increased more rapidly over the last 30 years as the gross domestic product and patents for new technologies have concentrated in two to three dozen communities that Moretti identifies as "brain hubs" or "innovation clusters." In these clusters, highly specialized innovation workers, such as engineers and designers, generate about three times as many local jobs for service workers — such as doctors, carpenters, and waitresses — as do manufacturing workers, Moretti said recently when speaking at Stanford Graduate School of Business.
Here are edited excerpts from Moretti's answers to questions from the Stanford audience.

What causes clusters to emerge?

This is a very active area of research, but I think fundamentally, there are three major reasons why clustering takes place. One is the thick labor market effect. If you are in a very highly specialized position, you want to be in a labor market where there are a lot of employers looking for workers, and a lot of workers looking for employers. The match between employer and employee tends to be more productive, more creative and innovative in thicker labor markets. The second is the same effect for the vendors, the providers of intermediate services. Companies in Silicon Valley will find very specialized IP lawyers, lab services, and shipping services that focus on that niche of the industry. And because they are so specialized, they're particularly good at what they're doing. The third factor is what economists call human capital spillovers — the fact that people learn from their colleagues, random encounters in a coffee shop, at a party, from their children, and so on. There's a lot of sociological evidence that this is one of the attractions of Silicon Valley. You're always near other people who are at the frontier, so you tend to exchange information. Sometimes it's information about job openings. Sometimes it's information about what you're doing, what type of technology you're adopting, what type of research you are doing. And this, as you can imagine, is important for R&D, for innovation. So these three forces are crucial, and that means that localities that already have a lot of innovation tend to attract even more workers and even more employers. That further strengthens their virtuous circle.

Are these clusters sustainable forever?

Probably not. Previous clusters have collapsed in spectacular ways. The Silicon Valley of the 1950s was Detroit.
People have researched the rise of Detroit, and it mimics very well the rise of Silicon Valley in terms of the amount of innovation, the type of engineering, the type of salaries they were paying. In the 1950s, if you were a car engineer, there wasn't any better place in the world to be, and if you were a car company, you had to be there. But then, of course, it collapsed. In my book, I have a chapter on the difference between Detroit and Silicon Valley. This region has kept reinventing itself in ways that are remarkable. It was all orchards, and then it became all hardware, and then it became all software. And now it's becoming something else: social media and biotech and clean tech. Some types of clusters don't survive big negative shocks, and other clusters are able to leverage themselves into the next thing.

Is there a clean energy cluster that is structurally different from an internet or an IT or a biotech cluster? Or are they all intermingled?

Typically, clusters are very specialized. Silicon Valley is the exception in the sense that there are so many different technologies. More typical examples are Boise, Idaho, for radio technology or Portland, Oregon, for semiconductors. Seattle has a combination of software and now a growing body of life sciences. Boston is mostly life science. D.C. is a remarkable story. It's very diversified now in terms of private-sector innovation, but most clusters are going to be small pockets of one industry.

Does your argument hold for high-paid but non-high-tech sectors? I was thinking of New York being a financial sector or L.A. being entertainment, and Houston being oil and gas. Then you mentioned Washington, D.C. That's government.

I would argue that the three you mentioned would belong to what I define as innovation sectors in the following sense: Finance in New York is not bank tellers; it's people who invent new products, new technology, and new ways of making things.
They are unique, and you can't easily reproduce the cluster somewhere else. That certainly applies to entertainment, especially the digital part of entertainment that is the fastest-growing part of entertainment jobs. It also applies to the D.C. cluster. The growth of D.C. over the last 20 years is mostly driven by private-sector headquarters moving there, and an educated labor force. Some of the companies are military contractors. Some companies are life science. They're anchored by the National Institutes of Health being there, and other government agencies. But most of the growth actually comes from the private sector. Now oil, Houston, I'm not sure. I don't know how strong these clustering forces are for these types of jobs. I would imagine — and we're not talking about the guy who drills, but it's more like the guy who plans where to drill — to the extent that there is a high component of innovation that makes something that is unique, I would say it applies.

If I'm a high-tech worker, how am I responsible for creating five other jobs? It's hard for me to accept there are five.

The way to interpret the multiplier is to imagine dropping 1,000 innovation jobs in one city but not in another, and then going back 10 years later to measure how many additional local service jobs there are in the city that received that injection of innovation-sector jobs. So it's a long-run effect, but it's not impossible for three reasons. One is that the average high-tech worker tends to do very, very well, and people who are wealthy tend to spend a large fraction of their salary on personal and local services. They tend to go to restaurants and movies, and to use taxis and therapists and doctors on average more than people who are paid less. The second reason is high-tech companies themselves employ a lot of local services; everything from security guards to IP lawyers, from the janitor to the very specialized consultant.
High-tech companies tend to use more services than manufacturing companies. The third reason is the clustering effect. Once you attract one of those high-tech workers, then in the medium to long run, you're going to be attracting even more of those high-tech workers and companies, which will further increase your multiplier. So it's a long-run number, measured over a 10-year period.

You pointed out that the salaries of the less-educated part of the local population are higher in those places that do have a lot of the innovation. How is that reconciled with the drastic drop over 30 years in their national average compensation?

We don't have enough brain hubs where innovation is concentrated. We have 320 metro areas in the U.S., and probably, by my definition, we have 15 to 20 brain hubs. In those places, you have brisk job creation outside the innovation sector, and you have decent wages for people outside it. But we also have a big chunk of the country producing not very much, in part because manufacturing jobs have been shrinking, and innovation hasn't really taken place.

So what hope is there for these areas?

That's a million-dollar question. It's tough because, in some sense, if this clustering effect is particularly strong, it's good news for places like here, but it's terrible news for places like Flint or Detroit. A successful local labor market has a very nice equilibrium, where you have a lot of skilled workers who want to go there and a lot of innovative employers who want to go there. It's really hard to re-create somewhere else. And it's not like we're not trying. We're spending $15 to $18 billion annually in what economists call place-based policies, which are essentially subsidies to try to attract employers to these areas. The idea being: "They're not coming, so if we just break this vicious circle, if we just bring some, then the clustering effect starts taking off. We can effectively create innovation hubs where they don't exist."
I haven't found one example of an innovation hub in the U.S. that has been created by deliberate policy that says, "We're going to create an innovation hub here." Taiwan might be a good success story. It's hard to get data, but Taiwan was an agricultural economy in the 1960s that had very little innovation. Then in the 1970s, it created enormous government subsidies for semiconductors and a lot of other technologies. All the others didn't pan out, but semiconductors worked. Taiwan is still putting money in, so it's not exactly clear whether it's a perfect example. Picking the next big thing is very hard for the venture capitalist. It's virtually impossible for the government worker.

What's the situation in other regions around the world?

Obviously, India and China are major success stories, but that doesn't mean that this clustering effect is not at play within those countries. A different example is Italy, where I am from. Italy has been the Detroit in this story. It had a very strong pharmaceutical sector in the 1980s, and a smaller computer cluster. Once the pharmaceutical industry started becoming global, you saw mergers and a concentration of the industry's R&D in a few places. I know because my dad was employed there, and his lab was first moved to Sweden and then to New Jersey. I think the same is happening throughout many countries in continental Europe, and even in places like China and India, which have success stories but enormous regional differences. The innovative part of the Chinese economy is concentrated in a handful of megalopolises. This is an interesting paradox of the current economy. Probably the best news of the last 20 years globally is the vast increase in the standard of living in places like China and India and Brazil, so there's certainly been a convergence in the standard of living when you compare nations. But when you look within those developing nations, you see the same great divergence that you see here.
Enrico Moretti is professor of economics at the University of California, Berkeley. His talk at Stanford was hosted by the Stanford Program on Regions of Innovation and Entrepreneurship, located in Stanford GSB.



"Religion is a system of human norms and values that is founded on belief in a superhuman order. The theory of relativity is not a religion, because (at least so far) there are no human norms and values that are founded on it. Football is not a religion because nobody argues that its rules reflect superhuman edicts. Islam, Buddhism and Communism are all religions, because all are systems of human norms and values that are founded on belief in a superhuman order. (Note the difference between ‘superhuman’ and ‘supernatural’. The Buddhist law of nature and the Marxist laws of history are superhuman, since they were not legislated by humans. Yet they are not supernatural.)" from "Sapiens: A Brief History of Humankind" by Yuval Noah Harari http://a.co/eFGsM2w


In Neanderthal DNA, Signs of a Mysterious Human Migration

Scientists who study ancient genes search for two kinds of genetic material. The vast majority of our genes are in a pouch in each cell called the nucleus. We inherit so-called nuclear DNA from both parents. But we also carry a small amount of DNA in the fuel-generating factories of our cells, called mitochondria. We inherit mitochondrial DNA only from our mothers, because a father’s sperm destroys its own mitochondrial DNA during fertilization.



"Understanding human history in the millennia following the Agricultural Revolution boils down to a single question: how did humans organise themselves in mass-cooperation networks, when they lacked the biological instincts necessary to sustain such networks? The short answer is that humans created imagined orders and devised scripts. These two inventions filled the gaps left by our biological inheritance." from "Sapiens: A Brief History of Humankind" by Yuval Noah Harari



"And this is not the end of the story. The field of artificial intelligence is seeking to create a new kind of intelligence based solely on the binary script of computers. Science-fiction movies such as The Matrix and The Terminator tell of a day when the binary script throws off the yoke of humanity. When humans try to regain control of the rebellious script, it responds by attempting to wipe out the human race." from "Sapiens: A Brief History of Humankind" by Yuval Noah Harari


UN says world population will reach 9.8 billion in 2050

June 22, 2017, by Edith M. Lederer

India's population is expected to surpass China's in about seven years and Nigeria is projected to overtake the United States and become the third most populous country in the world shortly before 2050, a U.N. report said Wednesday. The report by the Department of Economic and Social Affairs' Population Division forecasts that the current world population of nearly 7.6 billion will increase to 8.6 billion by 2030, 9.8 billion in 2050 and 11.2 billion in 2100. It said roughly 83 million people are added to the world's population every year and the upward trend is expected to continue even with a continuing decline in fertility rates, which have fallen steadily since the 1960s. John Wilmoth, director of the Population Division, said at a news conference that the report includes information on the populations of 233 countries or areas of the world. "The population in Africa is notable for its rapid rate of growth, and it is anticipated that over half of global population growth between now and 2050 will take place in that region," he said. "At the other extreme, it is expected that the population of Europe will, in fact, decline somewhat in the coming decades." The U.N. agency forecasts that from now through 2050 half the world's population growth will be concentrated in just nine countries—India, Nigeria, Congo, Pakistan, Ethiopia, Tanzania, United States, Uganda and Indonesia. Those nations are listed in the order of their "expected contribution to total growth," the report said. During the same period, it added, the populations of 26 African countries are expected to at least double. Nigeria, currently the world's seventh largest country, has the fastest growing population of the 10 most populous countries worldwide, and the report projects it will surpass the U.S. shortly before mid-century.
The new projections also forecast that China, which currently has 1.4 billion inhabitants, will be replaced as the world's most populous country around 2024 by India, which now has 1.3 billion inhabitants. The report, titled "The World Population Prospects: The 2017 Revision," said fertility has been declining in nearly all regions in recent years. Between 2010 and 2015, Wilmoth said, "the world's women had 2 1/2 births per woman over a lifetime—but this number varies widely around the world." "Europe has the lowest fertility level, estimated at 1.6 births per woman in the most recent period, while Africa has the highest fertility, with around 4.7 births per woman," he said. The report said birth rates in the 47 least developed countries remain relatively high, with population growth around 2.4 percent a year. While this rate is expected to slow significantly in the coming decades, the U.N. said the combined population of the 47 countries is projected to nearly double, from roughly 1 billion now to 1.9 billion in 2050. More and more countries now have fertility rates below the level of roughly 2.1 births per woman needed to replace the current generation, the report said. During the 2010-2015 period, fertility was below the replacement level in 83 countries comprising 46 percent of the world's population, it said. The 10 most populous countries with low fertility levels are China, United States, Brazil, Russia, Japan, Vietnam, Germany, Iran, Thailand and United Kingdom, the report said. In addition to slowing population growth, low fertility levels lead to an older population, the report noted. It forecasts that the number of people aged 60 or above will more than double from the current 962 million to 2.1 billion in 2050 and more than triple to 3.1 billion in 2100. A quarter of Europe's population is already aged 60 or over, and that share is projected to reach 35 percent in 2050 then remain around that level for the rest of the century, the report said.
More information: esa.un.org/unpd/wpp © 2017 The Associated Press. All rights reserved.


Software on Mars rover allows it to pick research targets autonomously

June 22, 2017, by Bob Yirka

Taking only 21,000 of the Curiosity mission’s total 3.8 million lines of code, AEGIS accurately selected desired targets over 2.5 kilometers of unexplored Martian terrain 93% of the time, compared to the 24% expected without the software. In this case, the desired target was outcrop, a type of Martian rock that’s ideal for analyzing the red planet’s geological history.

(Phys.org)—A team of researchers from the U.S., Denmark and France has created a report regarding the creation and use of software meant to give exploratory robots in space more autonomy. In their paper published in the journal Science Robotics, the team describes the software, called Autonomous Exploration for Gathering Increased Science (AEGIS), and how well it performed on the Mars rover Curiosity. Because of their limited computing power and distance from the Earth, space scientists believe that it would be advantageous for exploratory robots to have the ability to select which things to study. It would also allow for more research to be done when a robot is not able to communicate with Earth, such as when it is on the opposite face of a planet. Without such a system, a robot would have to scan a region, photograph it, send the photographic images back to Earth and then wait for instructions on what to do. With such a system, a robot such as Curiosity could scan the horizon, pick an object to study and then drive over and study it. This approach would save a lot of time, allowing the robot to study more objects before its useful lifespan expires. Because of that, NASA commissioned a team to create such software, which eventually became AEGIS. The software was tested and then uploaded to Curiosity in May of 2016 and was used 54 times over the next 11 months.
The software allows the rover to control what has been dubbed the ChemCam, which is a device that is used to study rocks or other geologic features—a laser is fired at a target and then sensors measure the gases that occur as a result. The researchers report that they found the system to be 93 percent accurate compared to 24 percent without its use. The software, they claim, saved many hours of mission time, which was used for engaging in other useful activities such as studying meteorite content. They also report that the software allowed for an increase in ChemCam targeting from 256 per day to 327, which meant that more data was collected in the same amount of time.

Figure captions (Francis et al., Sci. Robot. 2, eaan4582, 2017): ChemCam shoots lasers at rocks to analyze their content, leaving visible marks both on the surface and inside the 16-mm-diameter drill hole of the “Windjana" drill site. The Remote Micro-Imager on ChemCam shoots high-focus photos of distant targets, such as an area in the Peace Vallis alluvial fan, approximately 25 km away. In examples of AEGIS target selection collected from Martian day 1400 to 1660, rejected targets are outlined in blue and retained targets in red, with top-ranked targets shaded green and second-ranked targets shaded orange. AEGIS can also fix human commands that miss the mark, called “autonomous pointing refinement."

More information: AEGIS autonomous targeting for ChemCam on Mars Science Laboratory: Deployment and results of initial science team use, Science Robotics (2017). robotics.sciencemag.org/lookup/doi/10.1126/scirobotics.aan4582

Abstract: Limitations on interplanetary communications create operations latencies and slow progress in planetary surface missions, with particular challenges to narrow–field-of-view science instruments requiring precise targeting. The AEGIS (Autonomous Exploration for Gathering Increased Science) autonomous targeting system has been in routine use on NASA's Curiosity Mars rover since May 2016, selecting targets for the ChemCam remote geochemical spectrometer instrument. AEGIS operates in two modes. In autonomous target selection, it identifies geological targets in images from the rover's navigation cameras, choosing for itself the targets that best match the parameters specified by mission scientists, and immediately measures them with ChemCam, without Earth in the loop. In autonomous pointing refinement, the system corrects small pointing errors on the order of a few milliradians in observations targeted by operators on Earth, allowing very small features to be observed reliably on the first attempt. AEGIS consistently recognizes and selects the geological materials requested of it, parsing and interpreting geological scenes in tens to hundreds of seconds with very limited computing resources. Performance in autonomously selecting the most desired target material over the last 2.5 kilometers of driving into previously unexplored terrain exceeds 93% (where ~24% is expected without intelligent targeting), and all observations resulted in a successful geochemical observation. The system has substantially reduced lost time on the mission and markedly increased the pace of data collection with ChemCam.
AEGIS autonomy has rapidly been adopted as an exploration tool by the mission scientists and has influenced their strategy for exploring the rover's environment. © 2017 Phys.org
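The "match the parameters specified by mission scientists" idea behind autonomous target selection can be illustrated with a toy sketch. Everything here is invented for illustration (the `Candidate` class, the feature names, the scoring rule); the real flight software analyzes navigation-camera images on board and is far more sophisticated.

```python
# Toy sketch of AEGIS-style autonomous target selection: each candidate
# target is described by a few normalized features, scientists specify a
# desired feature profile, and the rover ranks candidates by how closely
# they match it, with no round trip to Earth required.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    brightness: float   # normalized 0..1
    texture: float      # normalized 0..1 roughness
    size: float         # normalized 0..1

def match_score(c, profile):
    """Distance between a candidate's features and the desired profile (lower is better)."""
    return (abs(c.brightness - profile["brightness"])
            + abs(c.texture - profile["texture"])
            + abs(c.size - profile["size"]))

def select_targets(candidates, profile, top_n=2):
    """Rank all candidates by closeness to the scientist-specified profile."""
    return sorted(candidates, key=lambda c: match_score(c, profile))[:top_n]

# Hypothetical request: bright, smooth, mid-sized rocks (outcrop-like).
desired = {"brightness": 0.8, "texture": 0.2, "size": 0.5}

field_of_view = [
    Candidate("rock_a", 0.75, 0.25, 0.5),   # close match
    Candidate("rock_b", 0.30, 0.90, 0.2),   # dark and rough: poor match
    Candidate("rock_c", 0.85, 0.15, 0.6),   # close match
]

picks = select_targets(field_of_view, desired)
print([c.name for c in picks])  # prints ['rock_a', 'rock_c']
```

The "pointing refinement" mode would correspond to re-running a search like this in a small window around a human-specified aim point, keeping the closest match.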


Deep Learning & Inference

What's the Difference Between Deep Learning Training and Inference?

This is the second of a multi-part series explaining the fundamentals of deep learning by long-time tech journalist Michael Copeland.

School's in session. That's how to think about deep neural networks going through the "training" phase. Neural networks get an education for the same reason most people do — to learn to do a job.

More specifically, the trained neural network is put to work out in the digital world using what it has learned — to recognize images, spoken words, a blood disease, or to suggest the shoes someone is likely to buy next, you name it — in the streamlined form of an application. This speedier and more efficient version of a neural network infers things about new data it's presented with based on its training. In the AI lexicon this is known as "inference": inference is where the capabilities learned during deep learning training are put to work.

Inference can't happen without training. Makes sense. That's how we gain and use our own knowledge for the most part. And just as we don't haul around all our teachers, a few overloaded bookshelves and a red-brick schoolhouse to read a Shakespeare sonnet, inference doesn't require all the infrastructure of its training regimen to do its job well.

So let's break down the progression from training to inference, and how they both function in the context of AI.

Training Deep Neural Networks

While the goal is the same — knowledge — the educational process, or training, of a neural network is (thankfully) not quite like our own. Neural networks are loosely modeled on the biology of our brains — all those interconnections between the neurons.
Unlike our brains, where any neuron can connect to any other neuron within a certain physical distance, artificial neural networks have separate layers, connections, and directions of data propagation.

When training a neural network, training data is put into the first layer of the network, and individual neurons assign a weighting to the input — how correct or incorrect it is — based on the task being performed.

In an image recognition network, the first layer might look for edges. The next might look for how these edges form shapes — rectangles or circles. The third might look for particular features — such as shiny eyes and button noses. Each layer passes the image to the next, until the final layer, where the final output is determined by the total of all those weightings.

But here's where the training differs from our own. Let's say the task was to identify images of cats. The neural network gets all these training images, does its weightings and comes to a conclusion of cat or not. What it gets in response from the training algorithm is only "right" or "wrong."

Training Is Compute Intensive

And if the algorithm informs the neural network that it was wrong, it doesn't get told what the right answer is. The error is propagated back through the network's layers and it has to guess at something else. In each attempt it must consider other attributes — in our example, attributes of "catness" — and weigh the attributes examined at each layer higher or lower. Then it guesses again. And again. And again. Until it has the correct weightings and gets the correct answer practically every time. It's a cat.

Training can teach deep learning networks to correctly label images of cats in a limited set before the network is put to work detecting cats in the broader world. Now you have a data structure, and all the weights in there have been balanced based on what the network learned as you sent the training data through. It's a finely tuned thing of beauty.
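The loop just described (guess, get told only that it was wrong, propagate the error back through the layers, adjust every weighting, guess again) can be sketched in a few lines of NumPy. The toy data, layer sizes and learning rate are all invented for illustration; a real training run is vastly larger:

```python
# A minimal sketch of the training loop described above: a tiny two-layer
# network on an invented, linearly separable "cat or not" toy problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                        # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(float)[:, None]   # toy "cat or not" labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.5, size=(2, 8))   # first layer: simple features
W2 = rng.normal(scale=0.5, size=(8, 1))   # final layer: the verdict
lr = 1.0

for step in range(2000):
    h = sigmoid(X @ W1)            # each layer passes its output to the next
    out = sigmoid(h @ W2)
    err = out - y                  # the network is only told how wrong it was
    # propagate the error back through the layers and adjust every weighting
    g_out = err * out * (1 - out)
    dW2 = h.T @ g_out
    g_h = (g_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ g_h
    W2 -= lr * dW2 / len(X)
    W1 -= lr * dW1 / len(X)

accuracy = ((out > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Even at this toy scale the pattern is the article's point: the loop runs thousands of times, and every pass touches every weight, which is why real training is so compute hungry.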
The problem is, it's also a monster when it comes to consuming compute. Andrew Ng, who honed his AI chops at Google and Stanford and is now chief scientist at Baidu's Silicon Valley Lab, says training one of Baidu's Chinese speech recognition models requires not only four terabytes of training data, but also 20 exaflops of compute — that's 20 billion billion math operations — across the entire training cycle. Try getting that to run on a smartphone.

That's where inference comes in.

Congratulations! Your Neural Network Is Trained and Ready for Inference

That properly weighted neural network is essentially a clunky, massive database. What you had to put in place to get that sucker to learn — in our education analogy, all those pencils, books and teacher's dirty looks — is now way more than you need to get any specific task accomplished. Isn't the point of graduating to be able to get rid of all that stuff?

If anyone is going to make use of all that training in the real world — and that's the whole point — what you need is a speedy application that can retain the learning and apply it quickly to data it's never seen. That's inference: taking smaller batches of real-world data and quickly coming back with the same correct answer (really, a prediction that something is correct).

While this is a brand new area of the field of computer science, there are two main approaches to taking that hulking neural network and modifying it for speed and improved latency in applications that run across other networks.

How Inferencing Works

How is inferencing used? Just turn on your smartphone. Inferencing is used to put deep learning to work for everything from speech recognition to categorizing your snapshots.

The first approach looks at parts of the neural network that don't get activated after it's trained. These sections just aren't needed and can be "pruned" away.
The second approach looks for ways to fuse multiple layers of the neural network into a single computational step.

It's akin to the compression that happens to a digital image. Designers might work on huge, beautiful, million-pixel-wide-and-tall images, but when they go to put them online, they'll turn them into a JPEG. It'll be almost exactly the same, indistinguishable to the human eye, but at a smaller resolution. Similarly, with inference you'll get almost the same accuracy of the prediction, but simplified, compressed and optimized for runtime performance.

What that means is we all use inference all the time. Your smartphone's voice-activated assistant uses inference, as do Google's speech recognition, image search and spam filtering applications. Baidu also uses inference for speech recognition, malware detection and spam filtering. Facebook's image recognition and Amazon's and Netflix's recommendation engines all rely on inference.

GPUs, thanks to their parallel computing capabilities — their ability to do many things at once — are good at both training and inference. Systems trained with GPUs allow computers to identify patterns and objects as well as — or in some cases, better than — humans (see "Accelerating AI with GPUs: A New Computing Model").

After training is completed, the networks are deployed into the field for "inference" — classifying data to "infer" a result. Here too, GPUs and their parallel computing capabilities offer benefits, as they run billions of computations based on the trained network to identify known patterns or objects.

You can see how these models and applications will just get smarter, faster and more accurate. Training will get less cumbersome, and inference will bring new applications to every aspect of our lives. It seems the same admonition applies to AI as it does to our youth — don't be a fool, stay in school. Inference awaits.
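Both optimizations described above can be illustrated with a toy sketch, assuming plain matrix layers: magnitude "pruning" zeroes near-dead connections, and two consecutive linear layers with no nonlinearity between them fuse into a single matrix. Sizes and thresholds here are invented for illustration:

```python
# Toy sketches of the two inference optimizations described above.
import numpy as np

rng = np.random.default_rng(1)

# 1) Pruning: drop connections whose trained weight is near zero.
#    The threshold is arbitrary, chosen just for illustration.
def prune(W, threshold=0.05):
    return W * (np.abs(W) >= threshold)

W = rng.normal(scale=0.2, size=(64, 64))
sparsity = (prune(W) == 0).mean()
print(f"fraction of weights pruned away: {sparsity:.2%}")

# 2) Fusion: two consecutive linear layers collapse into one matrix,
#    a single computational step instead of two. (Only valid when there
#    is no activation function between them.)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))
W_fused = W1 @ W2

x = rng.normal(size=(1, 4))
assert np.allclose((x @ W1) @ W2, x @ W_fused)  # same answer, fewer steps
```

As with the JPEG analogy, the pruned and fused network gives (almost) the same predictions; it just carries far less baggage at runtime.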


Quantum Computing - Around the Corner

Practical Quantum Computers

Advances at Google, Intel, and several research groups indicate that computers with previously unimaginable power are finally within reach.

by Russ Juskalian

One of the labs at QuTech, a Dutch research institute, is responsible for some of the world's most advanced work on quantum computing, but it looks like an HVAC testing facility. Tucked away in a quiet corner of the applied sciences building at Delft University of Technology, the space is devoid of people. Buzzing with resonant waves as if occupied by a swarm of electric katydids, it is cluttered by tangles of insulated tubes, wires, and control hardware erupting from big blue cylinders on three and four legs.

Inside the blue cylinders — essentially supercharged refrigerators — spooky quantum-mechanical things are happening where nanowires, semiconductors, and superconductors meet at just a hair above absolute zero. It's here, down at the limits of physics, that solid materials give rise to so-called quasiparticles, whose unusual behavior gives them the potential to serve as the key components of quantum computers. And this lab in particular has taken big steps toward finally bringing those computers to fruition. In a few years they could rewrite encryption, materials science, pharmaceutical research, and artificial intelligence.

Every year quantum computing comes up as a candidate for this Breakthrough Technologies list, and every year we reach the same conclusion: not yet. Indeed, for years qubits and quantum computers existed mainly on paper, or in fragile experiments to determine their feasibility. (The Canadian company D-Wave Systems has been selling machines it calls quantum computers for a while, using a specialized technology called quantum annealing. The approach, skeptics say, is at best applicable to a very constrained set of computations and might offer no speed advantage over classical systems.)
This year, however, a raft of previously theoretical designs are actually being built. Also new this year is the increased availability of corporate funding — from Google, IBM, Intel, and Microsoft, among others — for both research and the development of the assorted technologies needed to actually build a working machine: microelectronics, complex circuits, and control software.

The project at Delft, led by Leo Kouwenhoven, a professor who was recently hired by Microsoft, aims to overcome one of the most long-standing obstacles to building quantum computers: the fact that qubits, the basic units of quantum information, are extremely susceptible to noise and therefore error. For qubits to be useful, they must achieve both quantum superposition (a property something like being in two physical states simultaneously) and entanglement (a phenomenon where pairs of qubits are linked so that what happens to one can instantly affect the other, even when they're physically separated). These delicate conditions are easily upset by the slightest disturbance, like vibrations or fluctuating electric fields.

(Photo caption: This blue refrigerator gets down to just above absolute zero, making quantum experiments possible on tiny chips deep inside it. Subsequent photos show scenes from the Delft lab where the experiments are prepared.)

People have long wrestled with this problem in efforts to build quantum computers, which could make it possible to solve problems so complex they exceed the reach of today's best computers. But now Kouwenhoven and his colleagues believe the qubits they are creating could eventually be inherently protected — as stable as knots in a rope. "Despite deforming the rope, pulling on it, whatever," says Kouwenhoven, the knots remain and "you don't change the information."
Such stability would allow researchers to scale up quantum computers by substantially reducing the computational power required for error correction.

Kouwenhoven's work relies on manipulating unique quasiparticles that weren't even discovered until 2012. And it's just one of several impressive steps being taken. In the same lab, Lieven Vandersypen, backed by Intel, is showing how quantum circuits can be manufactured on traditional silicon wafers.

What Is a Quantum Computer?

At the heart of quantum computing is the quantum bit, or qubit, a basic unit of information analogous to the 0s and 1s represented by transistors in your computer. Qubits have much more power than classical bits because of two unique properties: they can represent both 1 and 0 at the same time, and they can affect other qubits via a phenomenon known as quantum entanglement. That lets quantum computers take shortcuts to the right answers in certain types of calculations.

Quantum computers will be particularly suited to factoring large numbers (making it easy to crack many of today's encryption techniques and probably providing uncrackable replacements), solving complex optimization problems, and executing machine-learning algorithms. And there will be applications nobody has yet envisioned.

Soon, however, we might have a better idea of what they can do. Until now, researchers have built fully programmable five-qubit computers and more fragile 10- to 20-qubit test systems. Neither kind of machine is capable of much. But the head of Google's quantum computing effort, Hartmut Neven, says his team is on target to build a 49-qubit system by as soon as a year from now.

The target of around 50 qubits isn't an arbitrary one. It's a threshold, known as quantum supremacy, beyond which no classical supercomputer would be capable of handling the exponential growth in memory and communications bandwidth needed to simulate its quantum counterpart.
In other words, the top supercomputer systems can currently do all the same things that five- to 20-qubit quantum computers can, but at around 50 qubits this becomes physically impossible.

All the academic and corporate quantum researchers I spoke with agreed that somewhere between 30 and 100 qubits — particularly qubits stable enough to perform a wide range of computations for longer durations — is where quantum computers start to have commercial value. And as soon as two to five years from now, such systems are likely to be for sale. Eventually, expect 100,000-qubit systems, which will disrupt the materials, chemistry, and drug industries by making accurate molecular-scale models possible for the discovery of new materials and drugs. And a million-physical-qubit system, whose general computing applications are still difficult to even fathom? It's conceivable, says Neven, "on the inside of 10 years."

MIT Technology Review
https://www.technologyreview.com/s/603495/10-breakthrough-technologies-2017-practical-quantum-computers/
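The two qubit properties the article leans on, superposition and entanglement, can be demonstrated with a tiny state-vector simulation. This is a noiseless toy with standard textbook gates, nothing like the fragile hardware described above:

```python
# Minimal state-vector sketch: superposition via a Hadamard gate,
# entanglement via CNOT producing a Bell state.
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)        # Hadamard gate

CNOT = np.array([[1, 0, 0, 0],              # |00> -> |00>
                 [0, 1, 0, 0],              # |01> -> |01>
                 [0, 0, 0, 1],              # |10> -> |11>
                 [0, 0, 1, 0]])             # |11> -> |10>

zero = np.array([1.0, 0.0])                 # qubit in state |0>
plus = H @ zero                             # superposition: 0 and 1 at once

two = np.kron(plus, zero)                   # add a second qubit in |0>
bell = CNOT @ two                           # entangle them

probs = np.abs(bell) ** 2                   # measurement probabilities
# Only |00> and |11> have nonzero probability (0.5 each): measuring
# one qubit instantly fixes what the other will read.
print(probs)
```

The interesting part is the last comment: after the CNOT, the two qubits no longer have independent values, which is exactly the linkage (and the fragility) the Delft team is trying to protect against noise.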


What is the Google Knowledge Graph?

Google Launches Knowledge Graph To Provide Answers, Not Just Links

Danny Sullivan on May 16, 2012 at 1:00 pm

Hinted at for months, Google formally launched its "Knowledge Graph" today. The new technology is being used to provide popular facts about people, places and things alongside Google's traditional results. It also allows Google to move toward a new way of searching: not for pages that match query terms, but for the "entities" or concepts that the words describe.

Google's Knowledge Graph?

"Graph" is a technical term used to describe how a set of objects are connected. Google has used a "link graph" to model how pages link to each other, in order to help determine which are popular and relevant for particular searches. Facebook has used a "social graph" to understand how people are connected. "Knowledge Graph" is Google's term for how it is building relationships between different people, places and things, and reporting facts about these entities.

Big Change, Subtle Appearance

Earlier this year, the Wall Street Journal wrote about the coming change. At the time, I felt what was described seemed more an extension of things Google had already been doing than a dramatic shift. Now, having seen it first-hand, I stand corrected. The WSJ had it right. This is indeed a big change, in line with other major launches like Search Plus Your World last January and Universal Search in 2007.

It's a big change, but I don't think it'll be a shocking one to most Google users, who will begin seeing it over the coming days on Google.com if they're searching in US English. Google will still look largely the same as it does now. Knowledge Graph information flows into new units — they have no official name (and I did ask), so I'll call them "knowledge panels." These panels appear to the right of Google's regular results, rather than disrupting those familiar links.

Knowledge panels don't always appear, showing up only when Google deems them relevant.
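Before going further, the "graph" idea just described can be made concrete with a toy sketch: nodes are entities (people, places, things), edges are typed relationships, and answering a factual query is just following edges. The handful of facts here is a hand-picked illustration, not Google's actual data or API:

```python
# Toy sketch of a knowledge graph: (subject, relation, object) triples.
# Entities and facts are chosen purely for illustration.
facts = [
    ("Matt Groening", "created", "The Simpsons"),
    ("Matt Groening", "born", "1954"),
    ("Frank Lloyd Wright", "designed", "Fallingwater"),
    ("Voyager 1", "visited", "Saturn"),
]

def lookup(entity, relation):
    """All objects connected to an entity by a given relation."""
    return [o for s, r, o in facts if s == entity and r == relation]

print(lookup("Matt Groening", "created"))   # → ['The Simpsons']
```

The relationships matter as much as the facts themselves: the same triple store that answers "what did Groening create?" can, run in reverse, answer "who created The Simpsons?", which is what lets a knowledge panel list the right facts for each entity.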
But when Google does think they're relevant, they're a pretty cool search exploration tool. When the head of Google Search, Amit Singhal, let me play with the new system following his keynote talk at our SMX London show yesterday, I couldn't help but think of it as a form of StumbleUpon or channel surfing for search.

Fact Surfing

A search for Star Trek brought up a panel that included a reference to Star Trek: Voyager, my favorite of all the series. Jumping to explore that, the Voyager box included a reference to Babylon 5, another favorite sci-fi show of mine. Jumping to that box, there was a reference to Claudia Christian, who wonderfully played one of the main characters in Babylon 5, Susan Ivanova. I surfed over for a look.

If you've ever started reading a Wikipedia page and then gotten lost jumping from one topic to another, that's the experience I think many are about to discover with Google. You'll not only discover answers to factual questions, but you'll likely quickly explore more than you had planned and have fun doing it.

3.5 Billion Facts About 500 Million Objects

Google says it has compiled over 3.5 billion facts, which include information about and relationships between 500 million objects, or "entities," as it sometimes calls them. In general, entities are persons, places and things. You know, nouns. In particular, these are just some of the categories of objects Google has facts about:

Actors, Directors, Movies
Art Works & Museums
Cities & Countries
Islands, Lakes, Lighthouses
Music Albums & Music Groups
Planets & Spacecraft
Roller Coasters & Skyscrapers
Sports Teams

Again, those are just some of the categories. The relationships are just as important as the facts. The relationships allow the Knowledge Graph to know which actors to list for a particular movie or which spacecraft have visited a planet.

The Most Popular Facts

How do you keep from getting overwhelmed with useless facts?
Google picks out the facts for each object that are most sought in relation to that object. "We are showing all the things that people look for in a given query," Singhal told me.

Consider these two knowledge panels, one for Simpsons creator Matt Groening, the other for architect Frank Lloyd Wright (you can click to enlarge). For both, you're told when they were born and where they were educated. After that, the remaining facts shown differ.

Only Groening has facts about his parents and siblings listed. Why? Look closely at the names: Margaret (Marge), Homer, Lisa. Groening named characters after his own family. Looking at searches related to Groening, Google can tell these are commonly sought answers.

For Groening, the books he's authored are listed. For Wright, his famous buildings are. That makes sense. People are far more interested in structures by Wright than in books by him. Indeed, Google's autocomplete suggestions — which are based on the most popular terms related to a core search topic — reflect this.

I found it fascinating to see what was shown as I ran through various classes of searches. For Disneyland, popular rides were shown. For a ride like Space Mountain, the duration was shown (really, only 3 minutes?). For an astronaut, I was shown the missions and the overall time they'd spent in space (how cool to have that as a fact about yourself). For Buckingham Palace, the size of the floor space was listed. For Larry Page and Mark Zuckerberg, their estimated net worth was shown.

Each knowledge panel has a "People also search for" area at the bottom which lists related people, places or things. Again, the relationships are determined by looking at search data. People who search for Groening, for example, often search for David X.
Cohen, who co-created Futurama with Groening.

For search marketers, or anyone interested in how people search, these panels have become another great discovery resource, along with keyword research tools like Google Trends, Google Insights, Google Correlate or the AdWords Keyword Tool.

Facts But Not Actions

One thing I found lacking was that the knowledge panels I saw often lacked links to let people take actions related to these objects. For example, one of the popular things people want in relation to Buckingham Palace is to book tickets for tours. However, the panel had no options for this. In contrast, the new "Snapshots" announced (but still about a week from going live) as part of Bing's relaunch last week are heavy on trying to help people do things like book tickets or reservations.

Why not have actions? "We will, of course, explore that, but right now, we just want to take it out and see how it works," Singhal said. Occasionally you can take actions via the links to some of the source providers of facts, as with some music searches that might credit Songkick or StubHub.

Which Andromeda Did You Mean?

For some searches, there may be more than one entity that Google has facts for related to a search. In these cases, rather than make the wrong guess, Google will put up a "See results" box, as shown below for Andromeda. Andromeda could mean, in Google's Knowledge Graph, the galaxy, the TV show or the Swedish band. This box, also known as a disambiguation box, allows people to make the right choice.

Where Do The Facts Come From?

How does Google know any of these facts? Google Squared was an initial attempt in 2009 to extract facts from the web. Google still has that technology, but the service was never that impressive on accuracy and was closed as a standalone site last year. Rather, it was Google's purchase of Metaweb in 2010 that really jump-started the Knowledge Graph.
Metaweb was building both the relationships and, through Freebase, a database of facts. Since that time, Singhal said, Google has massively grown the fact database. Contributions happen with Freebase, but data also comes from publicly available sources like Wikipedia and The CIA World Factbook, and even from information in Google Books. Beyond that, Google also licenses data from others. "Wherever we can get our hands on structured data, we add it," Singhal said.

Fixing Bad Data

Drawing from Wikipedia and other public sources means that there's no guarantee that the facts are accurate. That's why the knowledge panels on Google all have a "Report a problem" link at the bottom. If you click on that, you can then indicate if any particular fact is incorrect. Singhal said that Google will use a combination of computer algorithms and human review to decide if a particular fact should be corrected.

If Google makes a change, the source provider is told. This means, in particular, that Wikipedia will be informed of any errors. It doesn't have to change anything, but apparently the service is looking forward to the feedback. "They really are excited about it. They get to get feedback from a much bigger group of people," Singhal said.

Will Publisher Traffic Drop?

Search engines have increasingly moved toward showing direct answers in their results over the years. Such efforts have worried some publishers, leaving them wondering if they'll be left out of receiving search traffic. After all, if search engines provide answers right within their results, why would anyone click away? Google's Knowledge Graph is going to massively increase the number of direct answers shown, which will almost certainly renew concerns.

Singhal's response is that publishers shouldn't worry. He said that most of these types of queries, Google has found, don't take traffic away from most sites.
Part of this seems to be that the boxes encourage more searching, which in turn still eventually takes people to external sites. Still, some sites are going to lose out, he admits. But he sees that as something that was going to happen inevitably anyway, using a "2+2" metaphor: if people are searching for 2+2, why shouldn't Google give a direct answer to that, versus sending searchers to a site? By the way, Google does do math like this already, and has for years. Below, you can hear Singhal talk more about this when asked by a member of the audience at SMX London yesterday.

My concern is what happens if publishers have compiled great information that someone at Wikipedia or Freebase harvests into a database. For example, if a Disneyland fan site has organized a list of ride durations by doing original legwork, what credit does it get if that data is used? Facts can't be copyrighted, at least in the US, so anyone can help themselves, assuming they don't duplicate the exact format or presentation.

Google does list credit links to places like Wikipedia. In turn, Wikipedia does give credit (albeit in a way that doesn't help search rankings) to the sources it draws from. But that puts the actual source material two clicks away from the searcher, assuming the searcher wants to go beyond the fact they already received. This is one that has to be watched closely. As I wrote before, it seems likely the Knowledge Graph will impact a relatively small set of sites that focus on facts, sites that are already likely exposing answers in their listing descriptions and so not getting traffic anyway. But we'll see.

It's also important to remember that the "main" results aren't disappearing.
Consider again the Frank Lloyd Wright search, this time with the knowledge panel in context with the regular results. As you can see, links to sites outside of Google remain to the left, in the most-viewed area of a search results page.

Being Included

What if you want to be part of the new knowledge panels and the Knowledge Graph in general? Singhal said that at the moment, there's no mechanism designed for sites to do this. That is, if you run a site about Frank Lloyd Wright, there's no way to be associated as some type of suggested source for the Frank Lloyd Wright panel. Potentially, you could head over to Freebase, open an account and contribute. Of course, I'm pretty sure adding your blog to a horrible list of blogs like this isn't going to help. Maybe other categories might be more successful, but I'd hold off for the moment.

Tagging parts of your pages with commonly used schema might be helpful, though I wouldn't do this solely in hopes of getting your facts into the Knowledge Graph. The articles below have more about using schema.

Ads, Mobile & Tablet Formats

Anyone familiar with Google's ads will immediately wonder what happens when the panel shows. Singhal said that if there are also ads along with a knowledge panel for any search, the ads will still display. Google also has different formats for when a query has a few, many or no ads. I haven't seen these, but I'll try to update as they become visible after the launch.

In addition, Google also uses special formats to make the panels work well on tablet and mobile devices, he said. They aren't restricted to just desktop search, so that's good news for those of you who want an easier time cheating at pub and bar quiz nights. Sadly, there's no way to search the Knowledge Graph directly. It only appears with regular Google Search.

The Competition

Google's not alone in having a knowledge graph, of course. Wolfram Alpha, launched in 2009, has continued to refine its service.
It got a big boost when it was picked as a search partner by Apple to help power Siri (even though that recently embarrassed Apple on a particular search about smartphones). As for Bing, it has a partnership with Wolfram Alpha, plus it owns the Powerset technology that, somewhat similar to the Knowledge Graph, tries to deeply understand the meanings of words rather than just match patterns of letters. But Bing hasn't really seemed to capitalize on either its Wolfram partnership or Powerset.

Really, the Knowledge Graph seems to be going more head-to-head with Wolfram Alpha. Does it? "Wolfram is far more computational," Singhal said, explaining that Wolfram Alpha's goal seems to be finding ways that you can effectively use facts in computations. For example, you can enter "cars in california / california population" into Wolfram Alpha to have it take those two facts and come up with an average (about 1 car for every two people, by the way, using 2009 data). Google's not trying to perform these types of calculations. The focus is instead on providing popular facts.

The Future

The big picture, of course, is that some day the Knowledge Graph won't just be used for facts. Instead, if Google can better tag actual web pages to entities, then it can better understand what those pages are about and related to, which might increase the relevancy of its regular results. That's down the line, as are many other changes to the knowledge panels themselves. Today represents only a start. "This is just a baby step, in my view, to expose this to our users," Singhal said.

To learn more about the Google Knowledge Graph, see coverage from others across the web organized on Techmeme, the official Google blog post, plus the official video, below.

Picture 1

Google Launches Knowledge Graph To Provide Answers, Not Just Links

Google Launches Knowledge Graph To Provide Answers, Not Just Links Danny Sullivan on May 16, 2012 at 1:00 pm Hinted at for months, Google formally launched its “Knowledge Graph" today. The new technology is being used to provide popular facts about people, places and things alongside Google’s traditional results. It also allows Google to move toward a new way of searching not for pages that match query terms but for “entities" or concepts that the words describe.Knowledge Graph? “Graph" is a technical term used to describe how a set of objects are connected. Google has used a “link graph" to model how pages link to each other, in order to help determine which are popular and relevant for particular searches. Facebook has used a “social graph" understand how people are connected. “Knowledge Graph" is Google’s term for how it is building relationships between different people, places and things and report facts about these entities.Big Change, Subtle AppearanceEarlier this year, the Wall Street Journal wrote about the coming change. At the time, I felt what was described seemed more an extension of things Google had already been doing rather than a dramatic shift. Now having seen it first-hand, I stand corrected. The WSJ had it right. This is indeed a big change in line with other major launches like Search Plus Your World last January and Universal Search in 2007.Big change, but I don’t think it’ll be a shocking change to most Google users who will begin seeing it over the coming days on Google.com, if they’re searching in US English.Google will still look largely the same as it does now. Knowledge Graph information flows into new units — they have no official name (and I did ask), so I’ll call them “knowledge panels." These panels appear to the right of Google’s regular results, rather than disrupt those familiar links:Knowledge panels don’t always appear, only showing up only when Google deems them relevant. 
But when Google does think they’re relevant, they’re a pretty cool search exploration tool. When the head of Google Search, Amit Singhal, let me play with the new system following his keynote talk at our SMX London show yesterday, I couldn’t help but think of it like a form of StumbleUpon or channel surfing for search.Fact SurfingA search for Star Trek brought up a panel that included a reference to Star Trek: Voyager, my favorite of all the series. Jumping to explore that, the Voyager box included a reference to Babylon 5, another favorite sci-fi show of mine. Jumping to that box, there was a reference to Claudia Christian, who wonderfully played one of the main characters in Babylon 5, Susan Ivanova. I surfed over for a look.If you’ve ever started reading a Wikipedia page and then gotten lost jumping from one topic to another, that’s the experience I think many are about to discover with Google. You’ll not only discover answers to factual questions, but you’ll likely quickly explore more than you had planned and have fun doing it.3.5 Billion Facts About 500 Million ObjectsGoogle says it has compiled over 3.5 billion facts, which include information about and relationships between 500 million objects or “entities," as it sometimes calls them. In general, entities are persons, places and things. You know, nouns.In particular, these are just some of the categories of objects Google has facts about:Actors, Directors, MoviesArt Works & MuseumsCities & CountriesIslands, Lakes, LighthousesMusic Albums & Music GroupsPlanets & SpacecraftRoller Coasters & SkyscrapersSports TeamsAgain, those are just some of the categories. The relationships are also as important as the facts. The relationships allow the Knowledge Graph to know which actors to list for a particular movie or which spacecraft have visited a planet.The Most Popular FactsHow do you keep from getting overwhelmed with useless facts? 
Google picks out the facts for each object that are most sought in relation to that object. "We are showing all the things that people look for in a given query," Singhal told me.

Consider these two knowledge panels, one for Simpsons creator Matt Groening, the other for architect Frank Lloyd Wright (you can click to enlarge). For both, you're told when they were born and where they were educated. After that, the remaining facts shown differ.

Only Groening has facts about his parents and siblings listed. Why? Look closely at the names: Margaret (Marge), Homer, Lisa. Groening named characters after his own family. Looking at searches related to Groening, Google can tell these are commonly sought answers.

For Groening, the books he's authored are listed. For Wright, his famous buildings are. That makes sense. People are far more interested in structures by Wright than in books by him. Indeed, Google's autocomplete suggestions — which are based on the most popular terms related to a core search topic — reflect this.

I found it fascinating to see what was shown as I ran through various classes of searches. For Disneyland, popular rides were shown. For a ride like Space Mountain, the duration was shown (really, only 3 minutes?). For an astronaut, I was shown the missions and overall time they'd spent in space (how cool to have that as a fact about yourself). For Buckingham Palace, the size of floor space was listed. For Larry Page and Mark Zuckerberg, their estimated net worth was shown.

Each knowledge panel has a "People also search for" area at the bottom which lists related people, places or things. Again, the relationships are determined by looking at search data. People who search for Groening, for example, often search for David X.
Cohen, who co-created Futurama with Groening.

For search marketers, or anyone interested in how people search, these panels have become another great discovery resource, alongside keyword research tools like Google Trends, Google Insights, Google Correlate and the AdWords Keyword Tool.

Facts But Not Actions

One thing I found lacking: the knowledge panels I saw often had no links to let people take actions related to these objects. For example, one of the popular things people want in relation to Buckingham Palace is to book tickets for tours. However, the panel had no options for this.

In contrast, the new "Snapshots" announced (but still about a week from going live) as part of Bing's relaunch last week are heavy on trying to help people do things like book tickets or reservations.

Why not have actions? "We will, of course, explore that, but right now, we just want to take it out and see how it works," Singhal said.

Occasionally you can take actions via the links to some of the source providers of facts, as with some music searches that might credit Songkick or StubHub.

Which Andromeda Did You Mean?

For some searches, there may be more than one entity that Google has facts for. In these cases, rather than make the wrong guess, Google will put up a "See results" box, as shown below for Andromeda. Andromeda could mean, in Google's Knowledge Graph, the galaxy, the TV show or the Swedish band. This box, also known as a disambiguation box, allows people to make the right choice.

Where Do The Facts Come From?

How does Google know any of these facts? Google Squared was an initial attempt in 2009 to extract facts from the web. Google still has that technology, but the service was never that impressive on accuracy and closed as a standalone site last year.

Rather, it was Google's purchase of Metaweb in 2010 that really jump-started the Knowledge Graph.
Metaweb was building both the relationships and, through Freebase, a database of facts.

Since that time, Singhal said, Google has massively grown the fact database. Contributions happen with Freebase, but data also comes from publicly available sources like Wikipedia and The CIA World Factbook, and even information out of Google Books. Beyond that, Google also licenses data from others.

"Wherever we can get our hands on structured data, we add it," Singhal said.

Fixing Bad Data

Drawing from Wikipedia and other public sources means that there's no guarantee the facts are accurate. That's why the knowledge panels on Google all have a "Report a problem" link at the bottom.

If you click on that, you can then indicate if any particular fact is incorrect. Singhal said that Google will use a combination of computer algorithms and human review to decide if a particular fact should be corrected.

If Google makes a change, the source provider is told. This means, in particular, that Wikipedia will be informed of any errors. It doesn't have to change anything, but apparently the service is looking forward to the feedback.

"They really are excited about it. They get to get feedback from a much bigger group of people," Singhal said.

Will Publisher Traffic Drop?

Search engines have increasingly moved toward showing direct answers in their results over the years. Such efforts have worried some publishers, leaving them wondering if they'll be left out of receiving search traffic. After all, if search engines provide answers right within their results, why would anyone click away?

Google's Knowledge Graph is going to massively increase the number of direct answers shown, which will almost certainly renew concerns.

Singhal's response is that publishers shouldn't worry. He said that for most of these types of queries, Google has found, the panels don't take traffic away from most sites.
Part of this seems to be that the boxes encourage more searching, which in turn still eventually takes people to external sites.

Still, some are going to lose out, he admits. But he sees that as something that was going to happen inevitably anyway, using a "2+2" metaphor. If people are searching for 2+2, why shouldn't Google give a direct answer to that, versus sending searchers to a site? By the way, Google does do math like this already and has for years.

Below, you can hear Singhal talk more about this when asked by a member of the audience at SMX London yesterday.

My concern is what happens if publishers have compiled great information that someone at Wikipedia or Freebase harvests into a database. For example, if a Disneyland fan site has organized a list of ride durations by doing original legwork, what credit do they get if that data is used? Facts can't be copyrighted, at least in the US, so anyone can help themselves, assuming they don't duplicate the exact format or presentation.

Google does list credit links to places like Wikipedia. In turn, Wikipedia does give credit (albeit in a way that doesn't help search rankings) to the sources it draws from. But that puts the actual source material two clicks away from the searcher, assuming the searcher wants to go beyond the fact they already received.

This is one to watch closely. As I wrote before, it seems likely the Knowledge Graph will impact a relatively small set of sites that focus on facts, sites that are likely already exposing answers in their listing descriptions and so not getting that traffic anyway. But we'll see.

It's also important to remember that the "main" results aren't disappearing.
Consider again the Frank Lloyd Wright search, this time with the knowledge panel in context with the regular results. As you can see, links to sites outside of Google remain to the left, in the most viewed area of a search results page.

Being Included

What if you want to be part of the new knowledge panels and the Knowledge Graph in general? Singhal said that at the moment, there's no mechanism designed for sites to do this. That is, if you run a site about Frank Lloyd Wright, there's no way to be associated as some type of suggested source for the Frank Lloyd Wright panel.

Potentially, you could head over to Freebase, open an account and contribute. Of course, I'm pretty sure adding your blog to a horrible list of blogs like this isn't going to help. Maybe other categories might be more successful, but I'd hold off, for the moment.

Tagging parts of your pages with commonly used schema might be helpful, though I wouldn't do this solely in hopes of getting your facts into the Knowledge Graph. The articles below have more about using schema.

Ads, Mobile & Tablet Formats

Anyone familiar with Google's ads will immediately wonder what happens when the panel shows. Singhal said that if there are also ads along with a knowledge panel for any search, the ads will still display. Google also has different formats for when a query has a few, many or no ads. I haven't seen these, but I'll try to update as they become visible after the launch.

In addition, Google also uses special formats to make the panels work well on tablet and mobile devices, he said. They aren't restricted to desktop search, so that's good news for those of you who want an easier time cheating at pub and bar quiz nights.

Sadly, there's no way to search the Knowledge Graph directly. It only appears with regular Google Search.

The Competition

Google's not alone in having a knowledge graph, of course. Wolfram Alpha, launched in 2009, has continued to refine its service.
It got a big boost being picked as a search partner by Apple to help power Siri (even though that recently embarrassed Apple on a particular search about smartphones).

As for Bing, it has a partnership with Wolfram Alpha, plus it owns Powerset technology that, somewhat similar to the Knowledge Graph, tries to deeply understand the meanings of words rather than merely match patterns of letters. But Bing hasn't really seemed to capitalize on either its Wolfram partnership or Powerset.

Really, the Knowledge Graph seems to be going more head-to-head with Wolfram Alpha. Does it?

"Wolfram is far more computational," Singhal said, explaining that Wolfram Alpha's goal seems to be finding ways that you can effectively use facts in computations. For example, you can enter cars in california / california population into Wolfram Alpha to have it take those two facts and come up with an average (about 1 car for every 2 people, by the way, using 2009 data).

Google's not trying to perform these types of calculations. The focus is instead on providing popular facts.

The Future

The big picture, of course, is that some day the Knowledge Graph won't just be used for facts. Instead, if Google can better tie actual web pages to entities, it can better understand what those pages are about and related to, which might increase the relevancy of its regular results.

That's down the line, as are many other changes to the knowledge panels themselves. Today represents only a start.

"This is just a baby step, in my view, to expose this to our users," Singhal said.

To learn more about the Google Knowledge Graph, see coverage from others across the web organized on Techmeme, the official Google blog post, plus the official video below.
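As a concrete illustration of the schema tagging mentioned in the "Being Included" section, here is a minimal sketch that builds schema.org "Person" markup as JSON-LD for a hypothetical Frank Lloyd Wright fan page. This is one common way to embed structured data in a page; whether it actually feeds the Knowledge Graph is, as noted above, uncertain.

```python
import json

# A minimal, hypothetical schema.org "Person" snippet for a page about
# Frank Lloyd Wright. The field set is a small illustrative subset.
person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Frank Lloyd Wright",
    "birthDate": "1867-06-08",
    "jobTitle": "Architect",
}

# JSON-LD is typically embedded in a script tag in the page's HTML.
markup = '<script type="application/ld+json">%s</script>' % json.dumps(person)
print(markup)
```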
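The Wolfram Alpha query Singhal contrasts with the Knowledge Graph is, at bottom, just a division of two stored facts. A back-of-envelope version, with illustrative round numbers standing in for the real 2009 statistics:

```python
# Back-of-envelope version of the "cars in california / california population"
# computation. These figures are illustrative round numbers I chose to land
# near the article's stated result, NOT official statistics.
cars_in_california = 18_000_000       # assumed, for illustration only
california_population = 37_000_000    # assumed, for illustration only

cars_per_person = cars_in_california / california_population
print(round(cars_per_person, 2))  # roughly 0.49, i.e. about 1 car per 2 people
```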


A new 3-D printer could finally let the technology live up to its promise

by David Rotman
April 25, 2017

It's less than two months before his company's initial product launch, and CEO Ric Fulop is excitedly showing off rows of stripped-down 3-D printers, several bulky microwave furnaces, and assorted small metal objects on a table for display. Behind a closed door, a team of industrial designers sits around a shared work desk, each facing a large screen. The wall behind them is papered with various possible looks for the startup's ambitious products: 3-D printers that can fabricate metal parts cheaply and quickly enough to make the technology practical for widespread use in product design and manufacturing.

The company, Desktop Metal, has raised nearly $100 million from leading venture capital firms and the venture units of such companies as General Electric, BMW, and Alphabet. The founders include four prominent MIT professors, among them the head of the school's department of materials science and Emanuel Sachs, who filed one of the original patents on 3-D printing in 1989. Still, despite all the money and expertise, there's no guarantee the company will succeed in its goal of reinventing how we make metal parts—and thus transforming much of manufacturing.

As Fulop moves about the large, open workspace, his excitement and enthusiasm seem tempered by anxiety. The final commercial printers are not yet ready. Employees are busy tinkering with the machines, and fabricated test objects are scattered about. Progress is being made, but it's also obvious that the clock is ticking. In a corner near the front door, the floor is empty and taped off; soon the space needs to be filled with a mockup of the company's planned booth for an upcoming trade show.

If it succeeds, Desktop Metal will help solve a daunting challenge that has eluded developers of 3-D printing for more than three decades, severely limiting the technology's impact.
Indeed, despite considerable fanfare and evangelical enthusiasts, 3-D printing has, in many ways, been a disappointment.

Hobbyists and self-proclaimed makers can use relatively inexpensive 3-D printers to make wonderfully complex and ingenious shapes out of plastics. And some designers and engineers have found those machines useful for mocking up potential products. But printing polymer parts has found little use on the production floor beyond a few specialized products, such as customized hearing aids and dental implants.

Though it is possible to 3-D-print metals, doing so is difficult and pricey. Advanced manufacturing companies such as GE are using very expensive machines with specialized high-power lasers to make a few high-value parts (see "Additive Manufacturing" in our 10 Breakthrough Technologies list of 2013). But printing metals is limited to companies with millions to spend on the equipment, facilities to power the lasers, and highly trained technicians to run it all. And there is still no readily available option for those who want to print various iterations of a metal part during the process of product design and development.

[Photo: A hydraulic manifold is processed inside a microwave furnace, which uses temperatures up to 1,400 °C to sinter the steel part. Such a part is too complex to make with conventional methods.]

The shortcomings of 3-D printing mean the vision that has long excited its advocates remains elusive. They would like to create a digital design, print out prototypes that they could test and refine, and then use the digital file of the optimized version to create a commercial product or part out of the same material whenever they hit "make" on a 3-D printer.
Having an affordable and fast way to print metal parts would be an important step in making this vision a reality.

It would give designers more freedom, allowing them to create and test parts and devices with complex shapes that can't be made easily with any other production method—say, an intricate aluminum lattice or a metal object with internal cavities. It could eventually enable engineers and materials scientists to create parts with new functions and properties by depositing various combinations of materials—for example, printing out a magnetic metal next to a nonmagnetic one. Beyond that, it would redefine the economics of mass production, because the cost of printing something would be the same regardless of how many items were produced. That would change how manufacturers think about the size of factories, the need for backup inventory (why keep many parts in stock if you can simply and quickly print one out?), and the process of tailoring manufacturing to specialized products.

This is why there has been a race to turn 3-D printing into a new way to produce parts. Longtime suppliers of 3-D printers, including Stratasys and 3D Systems, are introducing increasingly advanced machines that are fast enough for manufacturers to use. Last year, HP introduced a line of 3-D printers that the company says will allow manufacturers to prototype and make products with nylon, a widely used thermoplastic. And last fall, GE spent over a billion dollars on a pair of European companies specializing in 3-D printing of metal parts.

[Photo: This steel propeller has just been printed. Between the propeller's blades and the metal support is a thin line of ceramic, which will turn to sand during the sintering process, allowing the finished part to be easily separated from the support.]

[Photo: The propeller after processing provides an example of a high-performance part that can be made with 3-D printing. Engineers can use the method to prototype and optimize different designs.]

But the real competition for Desktop Metal is probably not from the growing number of companies in 3-D printing. For one thing, the 3-D printers from HP, Stratasys (an investor in Desktop Metal), and 3D Systems mainly use various types of plastics, not the range of metals Fulop's company wants to use in its printers. And GE's high-end machines overlap little with Desktop Metal's market ambitions. Instead, the real competitors for Desktop Metal are more likely to be established metal-processing technologies. Those include automated machining techniques—such as the method used to make the ultra-thin aluminum back casing of iPhones—and a rapidly growing practice called metal injection molding, a common way to mass-produce metal products.

Key Players in 3-D printing

Company: Stratasys
Technology: One of the original 3-D-printing companies, Stratasys was founded by Scott Crump, the inventor of fused deposition modeling, the most common way to print plastic parts.
Products: Sells machines that can print a variety of photopolymer and thermoplastic materials.

Company: Carbon
Technology: This Silicon Valley startup has developed a novel photochemical process for fabricating parts out of various plastics, including polyurethane and epoxy.
Products: Introduced a modular system for manufacturers this spring.

Company: HP
Technology: Its line of machines exploits the company's long history with ink-jet printing through what it calls "multi jet fusion technology." This uses multiple nozzles for high-speed and high-resolution printing.
Products: Introduced its first 3-D printers last year. The initial machines print nylon, but the company is looking to expand to other materials.

Company: 3D Systems
Technology: The first 3-D-printing company, 3D Systems was founded by Chuck Hull, the inventor of stereolithography, which uses light to form parts out of photopolymers. It now offers various types of 3-D printers, including some that print metal parts.
Products: Introduced the latest iteration of stereolithography last year.

In other words, rather than merely trying to outdo other 3-D printers, Desktop Metal will have the tough task of converting manufacturers away from production methods that are at the heart of their businesses. But the very existence of this large, established market is what makes the prospect so intriguing. Making metal parts, says Fulop, "is a trillion-dollar industry." And even if 3-D printing wins only a small portion of it, he adds, it could still represent a multibillion-dollar opportunity.

Too hot to print

Look around. Metals are everywhere. But whereas 3-D printing has been widely used in making plastics, the technology's use in making metal parts "has been narrowly confined," says Chris Schuh, head of materials science and engineering at MIT and cofounder of Desktop Metal. "Metal processing is more of an art. It's a very challenging space."

Making metal objects using 3-D printing is difficult for several reasons. Most obvious is the high temperature required for processing metals. The most common way to print plastics involves heating polymers and squirting the material out the printer nozzle; the plastic then quickly hardens into the desired shape. The process is simple enough to be used in 3-D printers that sell for around $1,000. But building a 3-D printer that directly extrudes metals is not practical, given that aluminum melts at 660 °C, high-carbon steel at 1,370 °C, and titanium at 1,668 °C. Metal parts also have to go through several high-temperature processes to ensure the expected strength and other mechanical properties.

To make a 3-D printer fast enough to be used in manufacturing metal objects, Desktop Metal turned to a technology that dates back to the late 1980s. That's when a team of MIT engineers led by company cofounder Sachs filed a patent for "three-dimensional printing techniques."
It described a process of putting down a thin layer of metal powder and then using ink-jet printing to deposit a liquid that selectively binds the powder together. The process, which is repeated for hundreds or thousands of layers to define a metal part, can make parts with nearly unlimited geometric complexity. In the most common application of the technology, the binder acts like a glue. However, it can also be used to deposit different materials in different locations.

The MIT researchers knew their printing method could be used to make metal and ceramic parts, says Sachs. But they also knew it was too slow to be practical, and the metal powders required for the process were far too expensive at the time. Sachs turned to other research interests, including an effort to improve the manufacturing of photovoltaics (see "Praying for an Energy Miracle").

In the following decades, 3-D printing took off and captured the imagination of many product designers. Most famously, a cheap and easy-to-use 3-D printer from MakerBot was introduced in 2009, appealing to many self-styled inventors and tinkerers. But these affordable printers bumped up against the reality that they were limited to using a few cheap plastics. What's more, though the machines can print complex shapes, the final product often isn't as good as a plastic part made with conventional technology.

[Photo: Close-up of wing nut. Desktop Metal printed the bolt and wing nut separately to demonstrate that it can fabricate parts with tight tolerances.]

Meanwhile, researchers at industrial manufacturers like GE were busy advancing laser-based technologies invented in the late 1980s for printing metals. These machines use lasers—or, in some cases, high-power electron beams—to draw shapes in a layer of metal powder by melting the material. They repeat the process to build up a three-dimensional object out of the fused powders. The technique is impressive in its capabilities, but it's slow and expensive.
It is worthwhile only for extremely high-value parts that are too complex to make using other methods. Notably, GE's new jet engine uses a series of sophisticated 3-D-printed fuel nozzles; they are lighter and far more durable because intricate cooling channels have been built into them.

The founders of Desktop Metal decided that to make 3-D metal printing more widely accessible, they would need to sell two different types of machines: a relatively inexpensive "desktop" model suitable for designers and engineers fabricating prototypes, and one that is fast and large enough for manufacturers. Luckily, several innovations have finally made Sachs's original invention practical for mass production, including the development of very high-speed ink-jet printing for depositing the binder. Successively printing about 1,500 layers, each 50 micrometers thick and deposited in a few seconds, the production-scale printer can build up a 500-cubic-inch part in an hour. That's about 100 times faster than a laser-based 3-D printer can make metal parts.

For its prototyping machine, Desktop Metal adapted a method from plastic-based 3-D printing. But instead of a softened polymer, it uses metal powders mixed with a flowable polymer binder. The formulation is extruded, using the binder to clump the metal powder into the intended shapes.

However, whether the part is printed with the prototyping machine or the production model, the resulting object—part plastic binder and part metal—lacks the strength of a metal one. So it goes into a specially designed microwave furnace for sintering, a process of using heat to make the material more dense, producing a part with the desired properties.
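The throughput figures quoted above are easy to sanity-check. This short calculation (my own back-of-envelope check, not the company's) confirms that 1,500 layers in an hour works out to a per-layer time consistent with "a few seconds":

```python
# Sanity-checking the production printer's stated throughput:
# about 1,500 layers, each 50 micrometers thick, in one hour.
layers = 1500
layer_thickness_um = 50

build_height_mm = layers * layer_thickness_um / 1000  # total stack height
seconds_per_layer = 3600 / layers                     # one hour / all layers

print(build_height_mm)    # 75.0 mm of build height per hour
print(seconds_per_layer)  # 2.4 seconds per layer
```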
In a series of carefully calibrated steps during the sintering process, the polymer is burned off, and then the metal is fused together at a temperature well below its melting point.

The sales pitch

According to the promises of its enthusiasts, 3-D printing will reduce the need for industrial manufacturers and empower local artisan producers (see "The Difference Between Makers and Manufacturers"). The reality is likely to be far different but nonetheless profound. Many sectors of industrial production increasingly use automation and advanced software, and 3-D printing enhances this ongoing move to digital manufacturing. In some ways, it is not unlike an automated machining process that works off a digital file to create a metal part. What's different about 3-D printing is that it offers ways to make far more complex objects and removes many of the constraints that the production process puts on designers and engineers.

It could also inspire manufacturers to change their logistics and production strategies. For relatively small quantities of goods, 3-D printing could be cheaper, since it eliminates the costs associated with the tooling, casting, and molds required to churn out most metal and plastic objects. The time and money needed to set all that up is one reason why mass production is often required if a manufacturer is going to make money. Without that incentive to commit to mass-scale production, factories could shift production schedules and be more responsive to demand, moving even closer to just-in-time manufacturing. John Hart, a professor of mechanical engineering at MIT and cofounder of Desktop Metal, calls it customized mass production.
Rather than having large facilities make a huge number of identical parts that have to be shipped across the world and warehoused, manufacturers might maintain scattered factories that make a diverse set of products, ramping up production as needed. "The implications in a decade or two are probably beyond our imagination," Hart says. "I don't really think we know what we will do with these technologies."

For now, the challenge for Desktop Metal is to get its equipment in the hands of designers and engineers who are responsible for their companies' next generation of products. This winter Fulop was preparing to showcase the company's initial product, the prototyping machine, at a trade show in Pittsburgh in early May. (The production 3-D printer is scheduled to be available next year.) His task would be to convince attendees that spending $120,000 on Desktop Metal's prototyping printer and sintering furnace is essential for the future of their companies.

[Photo: One of the key advantages of 3-D printing is its ability to make complex structures, including internal lattices in a metal part. Such structures could be used to make lighter and stronger parts.]

It is a sales job that Fulop is well suited for. He has started more than a half-dozen companies, beginning with one that imported computer hardware and software, which he founded when he was 16 and still living in his native Venezuela. He is probably best known for founding A123 Systems, a battery company that was one of the highest-flying startups of the late 2000s, culminating in a $371 million IPO in 2009. The company was based on a novel lithium-ion technology developed by Yet-Ming Chiang, an MIT professor who is also a cofounder of Desktop Metal.
Like their current 3-D-printing startup, A123 hoped to apply materials science expertise to revolutionize a huge market.

Though A123 enjoyed rapid growth and a highly successful IPO, the company declared bankruptcy in 2012 (Fulop left in 2010). Ask Fulop the lesson from A123 and he says simply: "Batteries are a low-margin market." Indeed, A123 struggled to compete in an increasingly crowded battery business, and it didn't offer a radical enough performance improvement over established lithium-ion batteries to immediately win over a fledgling hybrid-vehicles market (see "A123's Technology Just Wasn't Good Enough").

The challenges faced by Desktop Metal will be very different. A huge market for metal parts already exists. And the startup believes its technology will, at least in the short run, have few direct competitors. Chiang points to the startup's "really rich" patent portfolio. "It's not just the materials; it's the techniques, it's the [sintering] furnace," he says. "The harder the technology is, the higher the barrier to entry you build if you're successful."

In his office, Chiang has a wooden box containing a half-dozen swords, on loan from the Museum of Fine Arts in Boston, that were made in the 1970s using traditional Japanese techniques. Chiang uses the swords in teaching. The lesson: how the craftsmen used the secrets of metallurgy to turn iron ore into the final product—an ultra-sharp, slightly curved steel sword. Showing off the swords, Chiang points to some of their details, explaining the tricks their makers used, such as the quenching method that creates an extremely hard edge and a softer body. Back at his desk, his attention again on Desktop Metal, he's equally enthusiastic as he describes the metal objects recently printed by the company and on display at its facilities. What's exciting is "the idea that you can really make these parts," Chiang says.
"A few hours, and here's a part that you couldn't even make before."

It won't replace such century-old production techniques as forging and metal casting, but 3-D printing could create new possibilities in manufacturing—and, just maybe, reimagine the art of metallurgy.

David Rotman, Editor

As the editor of MIT Technology Review, I spend much of my time thinking about the types of stories and journalism that will be most valuable to our readers. What do curious, well-informed readers need to know about emerging technologies? As a writer, I am particularly interested these days in the intersection of chemistry, materials science, energy, manufacturing, and economics.


DIY CRISPR Kits, Learn Modern Science By Doing

Update

I have had a lot of questions about whether anyone can use these kits, or how much knowledge, experience, or equipment is required. I want to say that everyone will be able to use these kits (they contain everything you need; no extra equipment is required), even if you have had zero experience with Biotechnology (there will be extensive written protocols and videos available). I believe that the only way this works is if Science is democratized so everyone has access.

The ODIN (http://the-odin.com) was started on the premise that if the Scientific population of the world doubled or tripled, it would change everything:

- Therapeutics and medicine
- Materials technology
- Fuel and food
- What if we could compost synthetic plastics?

Synthetic Biology is not illegal. So what is stopping this from happening? Until now, no one has taken the time to develop protocols and methods and then been willing to provide all of this at a reasonable price that can be afforded without large institutional grants.

Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) is just a long name to say that scientists found a protein (Cas9) that can use an RNA guide to make highly specific cuts in DNA. This allows unprecedented abilities to edit and engineer DNA. The reason it is such a great Synthetic Biology tool is its specificity and general applicability. One can target almost any DNA region in almost any organism, and the time to do this is an order of magnitude less than with previous genetic engineering techniques.

What if you could learn about cutting-edge techniques like CRISPR by actually performing experiments using them?

*Note to BioHackers: Each kit comes with all sequence and cloning details so you can perform your own custom genome engineering.

CRISPR Based Kits

Bacteria are a commonly used organism in Synthetic Biology because they grow fast and have simple cellular structures, making them easy to engineer.
This kit makes specific edits to genes using CRISPR, allowing the bacteria to survive on special growth media when they normally would not. Everything required to perform these experiments is included in the kit.

Sample CRISPR kit contents include, but are not limited to: a 20-200 uL professional lab-grade micropipette, pipette tips, a microcentrifuge tube rack and tubes, plates and media, DNA, and yeast or bacterial strains.

Yeast are a commonly used organism in Synthetic Biology because they are one of the simplest Eukaryotes (their cells are similar to those of mammals like Humans!). Normally, yeast grow with a nice creamy white color. This kit makes specific edits to the ADE2 gene using CRISPR, which causes red pigment to accumulate and the yeast to turn red. Yeast require a more complex media to grow, so this kit is more expensive. Everything required to perform these experiments is included in the kit.

Synthetic Biology Non-CRISPR Based Kits

Light Controlled Bacteria Kit

Have you ever wanted a remote control for the genetics inside a living cell? Well, now you can have that with the Light Controlled Bacteria Kit. Using this kit you will learn about optogenetic engineering techniques and fluorescence, and how to control the genetics of bacteria using blue light!

Engineering Glowing Bacteria Kit

What if you could engineer the genetics of bacteria to make them glow and give off light? This kit teaches you about genetic engineering and Bioluminescence so you can create your own glowing bacteria. What would you create?

My name is Josiah Zayner. I received my Ph.D. in Molecular Biophysics from the University of Chicago studying protein engineering. I have spent the past two years as a Research Fellow in NASA's Synthetic Biology program, where I work on engineering bacteria for terraforming Mars. Budget and funding issues at NASA necessitate that I mostly work alone. I figured there has to be another way, where more people can contribute to Science.
For this reason, I started The ODIN about a year ago to provide resources to BioHackers who wanted to do Science at home. In my free time I have been teaching BioHacking classes and testing protocols to build these kits. Now I just need you! Without you, Science will remain the stagnant behemoth out of everyone's reach. With you, people will be able to contribute to solving some of the most pressing issues we face in health, medicine, food and fuel. If we work together we can Create Something Beautiful.

FAQ

Ohhemmm gee, is everyone going to kill themselves with these kits?

There is nothing in these kits that is harmful to your health, besides maybe the shot glasses. The bacteria are less harmful than the bacteria on your skin, and the yeast are almost identical to the one you use when cooking.

What about a zombie apocalypse?

Though CRISPR can work in a variety of organisms, specific changes would need to be made to the DNA in order for it to cross any species barrier. The only zombies you will likely see will be your friends after using our shot glasses too much. Isn't the new season of The Walking Dead much better, though?


What's the Difference Between AI, Machine Learning, and Deep Learning?

What's the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?

This is the first of a multi-part series explaining the fundamentals of deep learning by long-time tech journalist Michael Copeland.

Artificial intelligence is the future. Artificial intelligence is science fiction. Artificial intelligence is already part of our everyday lives. All those statements are true; it just depends on what flavor of AI you are referring to.

For example, when Google DeepMind's AlphaGo program defeated South Korean Master Lee Se-dol in the board game Go earlier this year, the terms AI, machine learning, and deep learning were used in the media to describe how DeepMind won. And all three are part of the reason why AlphaGo trounced Lee Se-dol. But they are not the same things.

The easiest way to think of their relationship is to visualize them as concentric circles with AI — the idea that came first — the largest, then machine learning — which blossomed later — and finally deep learning — which is driving today's AI explosion — fitting inside both.

From Bust to Boom

AI has been part of our imaginations and simmering in research labs since a handful of computer scientists rallied around the term at the Dartmouth Conferences in 1956 and birthed the field of AI. In the decades since, AI has alternately been heralded as the key to our civilization's brightest future, and tossed on technology's trash heap as a harebrained notion of over-reaching propellerheads. Frankly, until 2012, it was a bit of both.

Over the past few years AI has exploded, especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful.
It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (that whole Big Data movement) — images, text, transactions, mapping data, you name it.

Let's walk through how computer scientists have moved from something of a bust — until 2012 — to a boom that has unleashed applications used by hundreds of millions of people every day.

Artificial Intelligence — Human Intelligence Exhibited by Machines

King me: computer programs that played checkers were among the earliest examples of artificial intelligence, stirring an early wave of excitement in the 1950s.

Back at that summer-of-'56 conference, the dream of those AI pioneers was to construct complex machines — enabled by emerging computers — that possessed the same characteristics of human intelligence. This is the concept we think of as "General AI" — fabulous machines that have all our senses (maybe even more), all our reason, and think just like we do. You've seen these machines endlessly in movies as friend — C-3PO — and foe — The Terminator. General AI machines have remained in the movies and science fiction novels for good reason: we can't pull it off, at least not yet.

What we can do falls into the concept of "Narrow AI": technologies that are able to perform specific tasks as well as, or better than, we humans can. Examples of narrow AI are things such as image classification on a service like Pinterest and face recognition on Facebook.

Those are examples of Narrow AI in practice. These technologies exhibit some facets of human intelligence. But how? Where does that intelligence come from? That gets us to the next circle, Machine Learning.

Machine Learning — An Approach to Achieve Artificial Intelligence

Spam-free diet: machine learning helps keep your inbox (relatively) free of spam.

Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world.
So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is "trained" using large amounts of data and algorithms that give it the ability to learn how to perform the task.

Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches over the years included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others. As we know, none achieved the ultimate goal of General AI, and even Narrow AI was mostly out of reach with early machine learning approaches.

As it turned out, one of the very best application areas for machine learning for many years was computer vision, though it still required a great deal of hand-coding to get the job done. People would go in and write hand-coded classifiers like edge detection filters so the program could identify where an object started and stopped; shape detection to determine if it had eight sides; a classifier to recognize the letters "S-T-O-P." From all those hand-coded classifiers they would develop algorithms to make sense of the image and "learn" to determine whether it was a stop sign.

Good, but not mind-bendingly great — especially on a foggy day when the sign isn't perfectly visible, or a tree obscures part of it. There's a reason computer vision and image detection didn't come close to rivaling humans until very recently: it was too brittle and too prone to error. Time, and the right learning algorithms, made all the difference.

Deep Learning — A Technique for Implementing Machine Learning

Herding cats: picking images of cats out of YouTube videos was one of the first breakthrough demonstrations of deep learning.

Another algorithmic approach from the early machine-learning crowd, Artificial Neural Networks, came and mostly went over the decades.
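Before going further, the "trained rather than hand-coded" idea above can be made concrete with a minimal sketch (not from the article): a perceptron, one of the earliest machine-learning algorithms, learns to separate two classes of 2-D points from labeled examples alone, with no hand-written rule. The data points are invented.

```python
# A perceptron learns a linear decision rule purely from labeled examples:
# when it misclassifies a point, it nudges its weights toward that point.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:          # label is +1 or -1
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != label:                    # mistake-driven update
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b += lr * label
    return w, b

# Toy data: points with large x1 + x2 are class +1, the rest are -1.
data = [((0, 0), -1), ((1, 0), -1), ((0, 1), -1),
        ((1, 1), 1), ((2, 1), 1), ((1, 2), 1)]
w, b = train_perceptron(data)
print(all((1 if w[0] * x1 + w[1] * x2 + b > 0 else -1) == y
          for (x1, x2), y in data))
```

No one told the program where the dividing line is; it found weights that fit the examples, which is the whole contrast with the hand-coded classifiers described next.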
Neural Networks are inspired by our understanding of the biology of our brains — all those interconnections between the neurons. But, unlike a biological brain where any neuron can connect to any other neuron within a certain physical distance, these artificial neural networks have discrete layers, connections, and directions of data propagation.

You might, for example, take an image and chop it up into a bunch of tiles that are inputted into the first layer of the neural network. In the first layer, individual neurons analyze the data and then pass it to a second layer. The second layer of neurons does its task, and so on, until the final layer produces the final output.

Each neuron assigns a weighting to its input — how correct or incorrect it is relative to the task being performed. The final output is then determined by the total of those weightings. So think of our stop sign example. Attributes of a stop sign image are chopped up and "examined" by the neurons — its octagonal shape, its fire-engine red color, its distinctive letters, its traffic-sign size, and its motion or lack thereof. The neural network's task is to conclude whether this is a stop sign or not. It comes up with a "probability vector," really a highly educated guess, based on the weighting. In our example the system might be 86% confident the image is a stop sign, 7% confident it's a speed limit sign, 5% confident it's a kite stuck in a tree, and so on — and the network architecture then tells the neural network whether it is right or not.

Even this example is getting ahead of itself, because until recently neural networks were all but shunned by the AI research community. They had been around since the earliest days of AI, and had produced very little in the way of "intelligence." The problem was that even the most basic neural networks were very computationally intensive; it just wasn't a practical approach.
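The stop-sign walkthrough above — layered weighted sums turned into a "probability vector" — can be sketched in a few lines. The weights and inputs here are invented for illustration; a real network would learn them from data rather than have them typed in.

```python
import math

# Toy two-layer forward pass: features go through a hidden layer, then a
# scoring layer, and a softmax turns the class scores into probabilities
# (the "86% stop sign, 7% speed limit" style of output described above).
def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum of inputs per neuron."""
    return [sum(w * x for w, x in zip(ws, inputs)) + b
            for ws, b in zip(weights, biases)]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

features = [0.9, 0.1, 0.8]                     # stand-in for image tiles
hidden = layer(features, [[1.0, -0.5, 0.3], [0.2, 0.8, -0.1]], [0.0, 0.1])
scores = layer(hidden, [[1.5, -1.0], [-0.5, 0.5], [-1.0, 0.5]], [0.0, 0.0, 0.0])
probs = softmax(scores)
print(probs)                                   # one probability per class; sums to 1
```

Training, covered next, is the process of adjusting those weight lists until the probability vector is right for (nearly) every example.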
Still, a small heretical research group led by Geoffrey Hinton at the University of Toronto kept at it, finally parallelizing the algorithms for supercomputers to run and proving the concept, but it wasn't until GPUs were deployed in the effort that the promise was realized.

If we go back again to our stop sign example, chances are very good that as the network is getting tuned or "trained" it's coming up with wrong answers — a lot. What it needs is training. It needs to see hundreds of thousands, even millions of images, until the weightings of the neuron inputs are tuned so precisely that it gets the answer right practically every time — fog or no fog, sun or rain. It's at that point that the neural network has taught itself what a stop sign looks like; or your mother's face in the case of Facebook; or a cat, which is what Andrew Ng did in 2012 at Google.

Ng's breakthrough was to take these neural networks and essentially make them huge — increase the layers and the neurons — and then run massive amounts of data through the system to train it. In Ng's case it was images from 10 million YouTube videos. Ng put the "deep" in deep learning, which describes all the layers in these neural networks.

Today, image recognition by machines trained via deep learning is in some scenarios better than humans, and that ranges from cats to identifying indicators for cancer in blood and tumors in MRI scans. Google's AlphaGo learned the game, and trained for its Go match — it tuned its neural network — by playing against itself over and over and over.

Thanks to Deep Learning, AI Has a Bright Future

Deep Learning has enabled many practical applications of Machine Learning and by extension the overall field of AI. Deep Learning breaks down tasks in ways that make all kinds of machine assists seem possible, even likely. Driverless cars, better preventive healthcare, even better movie recommendations, are all here today or on the horizon. AI is the present and the future.
With Deep Learning's help, AI may even get to that science fiction state we've so long imagined. You have a C-3PO? I'll take it. You can keep your Terminator.

To learn more about where deep learning is going next, listen to our in-depth interview with NVIDIA's own Bryan Catanzaro on the NVIDIA AI Podcast. https://blogs.nvidia.com/blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/


There’s a big problem with AI: even its creators can’t explain how it works

The Dark Secret at the Heart of AI

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn't look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions. Information from the vehicle's sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you'd expect from a human driver. But what if one day it did something unexpected — crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can't ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car's underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation.
There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen — or shouldn't happen — unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur — and it's inevitable they will. That's one reason Nvidia's car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who's approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. "It is a problem that is already relevant, and it's going to be much more relevant in the future," says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. "Whether it's an investment decision, a medical decision, or maybe a military decision, you don't want to just rely on a 'black box' method."

There's already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand.
Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can't always truly explain our thought processes either — but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We've never before built machines that operate in ways their creators don't understand. How well can we expect to communicate — and get along with — intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

The artist Adam Ferriss created this image, and the one below, using Google Deep Dream, a program that adjusts an image to stimulate the pattern recognition capabilities of a deep neural network. The pictures were produced using a mid-level layer of the neural network. Adam Ferriss

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital's vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are "pretty good" at predicting disease from a patient's records, says Joel Dudley, who leads the Mount Sinai team.
But, he adds, "this was just way better."

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn't know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. "We can build these models," Dudley says ruefully, "but we don't know how they work."

Artificial intelligence hasn't always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today's most powerful AI systems followed the latter path: the machine essentially programs itself.

At first this approach was of limited practical use, and in the 1960s and '70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network.
By the 1990s, neural networks could automatically digitize handwritten characters. But it was not until the start of this decade, after several clever tweaks and refinements, that very large — or "deep" — neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today's explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing — and beyond.

The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can't just look inside a deep neural network to see how it works. A network's reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.

The many layers in a deep network enable it to recognize things at different levels of abstraction.
In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.

Ingenious strategies have been used to try to capture and thus explain in more detail what's happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird's beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.

Further progress has been made using ideas borrowed from neuroscience and cognitive science.
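The "running the algorithm in reverse" trick behind Deep Dream can be miniaturized: keep the weights fixed and do gradient ascent on the input, so the input drifts toward whatever a chosen neuron responds to most. The one-layer tanh "network" below is invented for illustration and stands in for a real image model.

```python
import math
import random

# Toy activation maximization: climb the gradient with respect to the
# *input* of a tiny fixed network, not its weights.
random.seed(0)
N_IN, N_OUT = 8, 4
W = [[random.gauss(0, 1) for _ in range(N_IN)] for _ in range(N_OUT)]
TARGET = 2                                   # neuron whose activation we maximize

def activation(x):
    return math.tanh(sum(w * xi for w, xi in zip(W[TARGET], x)))

x = [0.0] * N_IN                             # start from a blank "image"
for _ in range(200):
    a = activation(x)
    # d/dx tanh(w . x) = (1 - tanh^2) * w, so step along W[TARGET]
    x = [max(-1.0, min(1.0, xi + 0.1 * (1 - a * a) * w))
         for xi, w in zip(x, W[TARGET])]

print(round(activation(x), 2))               # far above the starting value of 0.0
```

The input that emerges is, in effect, a picture of what the neuron "looks for" — which is exactly why the full-scale version of this idea produces those hallucinatory Deep Dream images.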
A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune's group showed how certain images could fool such a network into perceiving things that aren't there, because the images exploit the low-level patterns the system searches for. One of Clune's collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine's perceptual abilities.

This early artificial neural network, at the Cornell Aeronautical Laboratory in Buffalo, New York, circa 1960, processed inputs from light sensors. Ferriss was inspired to run Cornell's artificial neural network through Deep Dream, producing the images above and below. Adam Ferriss

We need more than a glimpse of AI's thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. "If you had a very small neural network, you might be able to understand it," Jaakkola says. "But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable."

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment.
She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: "imaging data, pathology data, all this information."

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. "You really need to have a loop where the machine and the human collaborate," Barzilay says.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military.
Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won't feel comfortable in a robotic tank that doesn't explain itself to them, and analysts will be reluctant to act on information without some reasoning. "It's often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made," Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning's program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team's approach, it could highlight certain keywords found in a message. Guestrin's group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.

One drawback to this approach and others like it, such as Barzilay's, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. "We haven't achieved the whole dream, which is where AI has a conversation with you, and it is able to explain," says Guestrin. "We're a long way from having truly interpretable AI."

It doesn't have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue.
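The keyword-highlighting idea described above can be sketched with a deliberately crude stand-in classifier (ours, invented for illustration, not Guestrin's actual system): score a message, then remove one word at a time and report the words whose removal changes the score the most.

```python
# Explanation by ablation: the words whose removal most reduces the
# classifier's score are offered as the rationale for its decision.
SUSPICIOUS = {"attack": 2.0, "transfer": 1.0, "tonight": 0.5}

def score(words):
    """Stand-in 'classifier': sum of keyword weights."""
    return sum(SUSPICIOUS.get(w, 0.0) for w in words)

def explain(message, top=2):
    words = message.lower().split()
    base = score(words)
    impact = {w: base - score([x for x in words if x != w]) for w in set(words)}
    return sorted((w for w in impact if impact[w] > 0),
                  key=lambda w: -impact[w])[:top]

print(explain("Transfer the funds before the attack tonight"))
```

A real system would ablate inputs to a trained model rather than a keyword table, but the shape of the explanation — "these parts of the input drove the output" — is the same, and so is the drawback the article notes: the explanation is a simplification of what the model actually computed.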
Knowing AI's reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn't discuss specific plans for Siri's future, but it's easy to imagine that if you receive a restaurant recommendation from Siri, you'll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. "It's going to introduce trust," he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won't be possible for AI to explain everything it does. "Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI," says Clune, of the University of Wyoming. "It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable."

If that's so, then at some stage we may have to simply trust AI's judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind.
A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?" he tells me in his cluttered office on the university’s idyllic campus. He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible," he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing," he says, “then don’t trust it." MIT Technology Review https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

Picture 1

Stephen Wolfram speaking at Harvard in 2010:

Stephen Wolfram speaking at Harvard in 2010: Well, in my life so far, I’ve basically done three large projects. And each of them in a different way informs my view of the future. Mathematica, in showing me what large-scale formalization can achieve. https://www.wolfram.com/mathematica/ Wolfram|Alpha, in helping me understand the span of human knowledge and the automation of a certain kind of intelligence. https://www.wolframalpha.com/ But for our purposes here today, the most important is ... A New Kind of Science—NKS. Because it provides the paradigm for what I’ll be talking about. And what NKS is really about is the core concept of computation. You know, when we think of computation today, we typically think of all those sophisticated computers and programs that we’ve set up to do particular tasks. But what NKS is about is the pure basic science of computation—the science of what’s out there in the computational universe of all possible programs. https://www.wolframscience.com/ For further reading: http://blog.stephenwolfram.com/2017/05/a-new-kind-of-science-a-15-year-view/ Wolfram, Stephen. Computation and the Future of the Human Condition (Kindle Locations 24-40). Wolfram Media, Inc. Kindle Edition.
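The "computational universe of all possible programs" that NKS explores is most often illustrated with elementary cellular automata, the book's central objects. A minimal sketch (my own illustration, not Wolfram's code) that evolves rule 30 — the book's signature example of simple rules producing complex behavior — from a single black cell, using Wolfram's standard rule-numbering scheme:

```python
def step(cells, rule):
    """One step of an elementary cellular automaton (Wolfram rule numbering).

    Each cell's next state is the bit of `rule` indexed by its 3-cell
    neighborhood read as a binary number; edges wrap around.
    """
    n = len(cells)
    out = []
    for i in range(n):
        left, c, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (c << 1) | right
        out.append((rule >> idx) & 1)
    return out


# Rule 30 from a single black cell — the classic NKS picture.
cells = [0] * 15 + [1] + [0] * 15
for _ in range(8):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, 30)
```

Sweeping `rule` from 0 to 255 enumerates every elementary cellular automaton — a tiny, exhaustive slice of the computational universe the talk describes.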

Picture 1

Age of the Information Oligarch

Will Democracy Survive Big Data and Artificial Intelligence? Editor’s Note: This article first appeared in Spektrum der Wissenschaft, Scientific American’s sister publication, as “Digitale Demokratie statt Datendiktatur." “Enlightenment is man’s emergence from his self-imposed immaturity. Immaturity is the inability to use one’s understanding without guidance from another." —Immanuel Kant, “What is Enlightenment?" (1784) The digital revolution is in full swing. How will it change our world? The amount of data we produce doubles every year. In other words: in 2016 we produced as much data as in the entire history of humankind through 2015. Every minute we produce hundreds of thousands of Google searches and Facebook posts. These contain information that reveals how we think and feel. Soon, the things around us, possibly even our clothing, also will be connected with the Internet. It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours. Many companies are already trying to turn this Big Data into Big Money. Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet? The field of artificial intelligence is, indeed, making breathtaking advances. In particular, it is contributing to the automation of data analysis. Artificial intelligence is no longer programmed line by line, but is now capable of learning, thereby continuously developing itself. Recently, Google's DeepMind algorithm taught itself how to win 49 Atari games. Algorithms can now recognize handwritten language and patterns almost as well as humans and even complete some tasks better than them. They are able to describe the contents of photos and videos. Today 70% of all financial transactions are performed by algorithms. 
News content is, in part, automatically generated. This all has radical economic consequences: in the coming 10 to 20 years around half of today's jobs will be threatened by algorithms. 40% of today's top 500 companies will have vanished in a decade. It can be expected that supercomputers will soon surpass human capabilities in almost all areas—somewhere between 2020 and 2060. Experts are starting to ring alarm bells. Technology visionaries, such as Elon Musk from Tesla Motors, Bill Gates from Microsoft and Apple co-founder Steve Wozniak, are warning that super-intelligence is a serious danger for humanity, possibly even more dangerous than nuclear weapons. Is This Alarmism? One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars, the automation of society is next. With this, society is at a crossroads, which promises great opportunities, but also considerable risks. If we make the wrong decisions it could threaten our greatest historical achievements. In the 1940s, the American mathematician Norbert Wiener (1894–1964) invented cybernetics. According to him, the behavior of systems could be controlled by means of suitable feedback. Very soon, some researchers imagined controlling the economy and society according to this basic principle, but the necessary technology was not available at that time. Today, Singapore is seen as a perfect example of a data-controlled society. What started as a program to protect its citizens from terrorism has ended up influencing economic and immigration policy, the property market and school curricula. China is taking a similar route. Recently, Baidu, the Chinese equivalent of Google, invited the military to take part in the China Brain Project. 
It involves running so-called deep learning algorithms over the search engine data collected about its users. Beyond this, a kind of social control is also planned. According to recent reports, every Chinese citizen will receive a so-called "Citizen Score", which will determine under what conditions they may get loans, jobs, or travel visas to other countries. This kind of individual monitoring would include people’s Internet surfing and the behavior of their social contacts (see "Spotlight on China"). With consumers facing increasingly frequent credit checks and some online shops experimenting with personalized prices, we are on a similar path in the West. It is also increasingly clear that we are all the focus of institutional surveillance. This was revealed in 2015 when details of the British secret service's "Karma Police" program became public, showing the comprehensive screening of everyone's Internet use. Is Big Brother now becoming a reality? Continue reading in Scientific American https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/

Picture 1

From Salt to Infonomics

Our most valued commodities have gone from salt and sugar to chemicals and fuels to data and services. Whereas land was the raw material of the agricultural age and iron was the raw material of the industrial age, data is the raw material of the information age. "Data is the currency of the digital age," said Jim Barbaresso, who leads Intelligent Transportation Systems at HNTB. "Vehicle data could be the beginning of a modern day gold rush." "Data is really king in this market," said Tasha Keeney, an analyst at ARK Invest, which forecasts that the autonomous taxi market could be worth $10 trillion in the early 2030s. (For comparison, vehicle sales are a $2 trillion market today.) The ‘data is an asset’ or ‘data is a business asset’ message is not new. It goes back over two decades. However, despite the fact that so many people have said it so often before, we still see a difference between preaching and practice. It’s not that organizations fail to understand the importance of data, information and actionable intelligence (well, some do) in the age of big data. It’s mainly that many businesses don’t fully grasp how much of a business asset data really is. It seems obvious: In an age of Digital Transformation, the information assets of an organization are increasingly THE defining source of organizational value. There is a growing gap between the traditional ways we value organizations -- in terms of the tangible and intangible assets reported in financial statements -- and the value the market puts on organizations. Over and over, in the face of the latest acquisition, we ask ourselves, “How on earth could company X pay so much for company Y?" According to Doug Laney from Gartner, in 1975, on average the tangible assets of a corporation represented 83% of its value. Today that number is 20%. As a result, over 50% of merger and acquisition exchanges can’t be accounted for. The most recent example is the Microsoft acquisition of LinkedIn. 
We look at the accounting value of LinkedIn -- $3.2 billion in revenues -- and compare it to the price paid by Microsoft -- $26 billion – and shake our heads and wonder: - Is this a signal of another dot-com bubble? - Why is the accounting value of the company so different from the market value? - Is Microsoft crazy? Or crazy like a fox? I believe that the core of this disequilibrium lies in our inability to properly measure and value the information assets of an organization. And this inability is reflected not only in a growing gap between what we report about companies and what we inherently know about companies, but also in systematically undervaluing the investments that companies make in processes to optimize information, protect it, and utilize it to create customer value. Simply stated, if you can’t measure it, you won’t value it. If “information" is the currency of the Digital Age, why don’t organizations manage their information assets with the same seriousness as their financial assets, their physical assets, and their human assets? Why is “Infonomics" such a difficult concept for organizations to grasp?

Picture 1

Emotions = Biochemical Algorithms

In recent decades life scientists have demonstrated that emotions are not some mysterious spiritual phenomenon that is useful just for writing poetry and composing symphonies. Rather, emotions are biochemical algorithms that are vital for the survival and reproduction of all mammals. The twenty-first century will be dominated by algorithms. ‘Algorithm’ is arguably the single most important concept in our world. If we want to understand our life and our future, we should make every effort to understand what an algorithm is, and how algorithms are connected with emotions. An algorithm is a methodical set of steps that can be used to make calculations, resolve problems and reach decisions. An algorithm isn’t a particular calculation, but the method followed when making the calculation. For example, if you want to calculate the average between two numbers, you can use a simple algorithm. The algorithm says: ‘First step: add the two numbers together. Second step: divide the sum by two.’ When you enter the numbers 4 and 8, you get 6. When you enter 117 and 231, you get 174. A more complex example is a cooking recipe. An algorithm for preparing vegetable soup may tell us: 1. Heat half a cup of oil in a pot. 2. Finely chop four onions. 3. Fry the onion until golden. 4. Cut three potatoes into chunks and add to the pot. 5. Slice a cabbage into strips and add to the pot. And so forth. You can follow the same algorithm dozens of times, each time using slightly different vegetables, and therefore getting a slightly different soup. But the algorithm remains the same. Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow (pp. 83-84). HarperCollins. Kindle Edition.
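Harari's two-step averaging procedure translates directly into code. A minimal sketch (my own illustration, not from the book) showing his point that the algorithm is the method, not any particular calculation — the same two steps work on any pair of inputs:

```python
def average(a, b):
    """Harari's two-step averaging algorithm."""
    total = a + b      # first step: add the two numbers together
    return total / 2   # second step: divide the sum by two


# Same algorithm, different inputs, different results — like the soup recipe
# run with slightly different vegetables.
print(average(4, 8))      # 6.0
print(average(117, 231))  # 174.0
```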

Picture 1

Natural Selection Succumbs to Intelligent Design

Some people fear that today we are again in mortal danger of massive volcanic eruptions or colliding asteroids. Hollywood producers make billions out of these anxieties. Yet in reality, the danger is slim. Mass extinctions occur once every many millions of years. Yes, a big asteroid will probably hit our planet sometime in the next 100 million years, but it is very unlikely to happen next Tuesday. Instead of fearing asteroids, we should fear ourselves. For Homo sapiens has rewritten the rules of the game. This single ape species has managed within 70,000 years to change the global ecosystem in radical and unprecedented ways. Our impact is already on a par with that of ice ages and tectonic movements. Within a century, our impact may surpass that of the asteroid that killed off the dinosaurs 65 million years ago. That asteroid changed the trajectory of terrestrial evolution, but not its fundamental rules, which have remained fixed since the appearance of the first organisms 4 billion years ago. During all those aeons, whether you were a virus or a dinosaur, you evolved according to the unchanging principles of natural selection. In addition, no matter what strange and bizarre shapes life adopted, it remained confined to the organic realm – whether a cactus or a whale, you were made of organic compounds. Now humankind is poised to replace natural selection with intelligent design, and to extend life from the organic realm into the inorganic. Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow (p. 73). HarperCollins. Kindle Edition.

Picture 1

A Brief History of Lawns

If history doesn’t follow any stable rules, and if we cannot predict its future course, why study it? It often seems that the chief aim of science is to predict the future – meteorologists are expected to forecast whether tomorrow will bring rain or sunshine; economists should know whether devaluing the currency will avert or precipitate an economic crisis; good doctors foresee whether chemotherapy or radiation therapy will be more successful in curing lung cancer. Similarly, historians are asked to examine the actions of our ancestors so that we can repeat their wise decisions and avoid their mistakes. But it almost never works like that because the present is just too different from the past. It is a waste of time to study Hannibal’s tactics in the Second Punic War so as to copy them in the Third World War. What worked well in cavalry battles will not necessarily be of much benefit in cyber warfare. Science is not just about predicting the future, though. Scholars in all fields often seek to broaden our horizons, thereby opening before us new and unknown futures. This is especially true of history. Though historians occasionally try their hand at prophecy (without notable success), the study of history aims above all to make us aware of possibilities we don’t normally consider. Historians study the past not in order to repeat it, but in order to be liberated from it. Each and every one of us has been born into a given historical reality, ruled by particular norms and values, and managed by a unique economic and political system. We take this reality for granted, thinking it is natural, inevitable and immutable. We forget that our world was created by an accidental chain of events, and that history shaped not only our technology, politics and society, but also our thoughts, fears and dreams. The cold hand of the past emerges from the grave of our ancestors, grips us by the neck and directs our gaze towards a single future. 
We have felt that grip from the moment we were born, so we assume that it is a natural and inescapable part of who we are. Therefore we seldom try to shake ourselves free, and envision alternative futures. Studying history aims to loosen the grip of the past. It enables us to turn our head this way and that, and begin to notice possibilities that our ancestors could not imagine, or didn’t want us to imagine. By observing the accidental chain of events that led us here, we realise how our very thoughts and dreams took shape – and we can begin to think and dream differently. Studying history will not tell us what to choose, but at least it gives us more options. Movements seeking to change the world often begin by rewriting history, thereby enabling people to reimagine the future. Whether you want workers to go on a general strike, women to take possession of their bodies, or oppressed minorities to demand political rights – the first step is to retell their history. The new history will explain that ‘our present situation is neither natural nor eternal. Things were different once. Only a string of chance events created the unjust world we know today. If we act wisely, we can change that world, and create a much better one.’ This is why Marxists recount the history of capitalism; why feminists study the formation of patriarchal societies; and why African Americans commemorate the horrors of the slave trade. They aim not to perpetuate the past, but rather to be liberated from it. What’s true of grand social revolutions is equally true at the micro level of everyday life. A young couple building a new home for themselves may ask the architect for a nice lawn in the front yard. Why a lawn? ‘Because lawns are beautiful,’ the couple might explain. But why do they think so? It has a history behind it. Stone Age hunter-gatherers did not cultivate grass at the entrance to their caves. 
No green meadow welcomed the visitors to the Athenian Acropolis, the Roman Capitol, the Jewish Temple in Jerusalem or the Forbidden City in Beijing. The idea of nurturing a lawn at the entrance to private residences and public buildings was born in the castles of French and English aristocrats in the late Middle Ages. In the early modern age this habit struck deep roots, and became the trademark of nobility. Well-kept lawns demanded land and a lot of work, particularly in the days before lawnmowers and automatic water sprinklers. In exchange, they produce nothing of value. You can’t even graze animals on them, because they would eat and trample the grass. Poor peasants could not afford wasting precious land or time on lawns. The neat turf at the entrance to chateaux was accordingly a status symbol nobody could fake. It boldly proclaimed to every passerby: ‘I am so rich and powerful, and I have so many acres and serfs, that I can afford this green extravaganza.’ The bigger and neater the lawn, the more powerful the dynasty. If you came to visit a duke and saw that his lawn was in bad shape, you knew he was in trouble. The precious lawn was often the setting for important celebrations and social events, and at all other times was strictly off-limits. To this day, in countless palaces, government buildings and public venues a stern sign commands people to ‘Keep off the grass’. In my former Oxford college the entire quad was formed of a large, attractive lawn, on which we were allowed to walk or sit on only one day a year. On any other day, woe to the poor student whose foot desecrated the holy turf. Royal palaces and ducal chateaux turned the lawn into a symbol of authority. When in the late modern period kings were toppled and dukes were guillotined, the new presidents and prime ministers kept the lawns. Parliaments, supreme courts, presidential residences and other public buildings increasingly proclaimed their power in row upon row of neat green blades. 
Simultaneously, lawns conquered the world of sports. For thousands of years humans played on almost every conceivable kind of ground, from ice to desert. Yet in the last two centuries, the really important games – such as football and tennis – are played on lawns. Provided, of course, you have money. In the favelas of Rio de Janeiro the future generation of Brazilian football is kicking makeshift balls over sand and dirt. But in the wealthy suburbs, the sons of the rich are enjoying themselves over meticulously kept lawns. Humans thereby came to identify lawns with political power, social status and economic wealth. No wonder that in the nineteenth century the rising bourgeoisie enthusiastically adopted the lawn. At first only bankers, lawyers and industrialists could afford such luxuries at their private residences. Yet when the Industrial Revolution broadened the middle class and gave rise to the lawnmower and then the automatic sprinkler, millions of families could suddenly afford a home turf. In American suburbia a spick-and-span lawn switched from being a rich person’s luxury into a middle-class necessity. This was when a new rite was added to the suburban liturgy. After Sunday morning service at church, many people devotedly mowed their lawns. Walking along the streets, you could quickly ascertain the wealth and position of every family by the size and quality of their turf. There is no surer sign that something is wrong at the Joneses’ than a neglected lawn in the front yard. Grass is nowadays the most widespread crop in the USA after maize and wheat, and the lawn industry (plants, manure, mowers, sprinklers, gardeners) accounts for billions of dollars every year. The lawn did not remain solely a European or American craze. 
Even people who have never visited the Loire Valley see US presidents giving speeches on the White House lawn, important football games played out in green stadiums, and Homer and Bart Simpson quarrelling about whose turn it is to mow the grass. People all over the globe associate lawns with power, money and prestige. The lawn has therefore spread far and wide, and is now set to conquer even the heart of the Muslim world. Qatar’s newly built Museum of Islamic Art is flanked by magnificent lawns that hark back to Louis XIV’s Versailles much more than to Haroun al-Rashid’s Baghdad. They were designed and constructed by an American company, and their more than 100,000 square yards of grass – in the midst of the Arabian desert – require a stupendous amount of fresh water each day to stay green. Meanwhile, in the suburbs of Doha and Dubai, middle-class families pride themselves on their lawns. If it were not for the white robes and black hijabs, you could easily think you were in the Midwest rather than the Middle East. Having read this short history of the lawn, when you now come to plan your dream house you might think twice about having a lawn in the front yard. You are of course still free to do it. But you are also free to shake off the cultural cargo bequeathed to you by European dukes, capitalist moguls and the Simpsons – and imagine for yourself a Japanese rock garden, or some altogether new creation. This is the best reason to learn history: not in order to predict the future, but to free yourself of the past and imagine alternative destinies. Of course this is not total freedom – we cannot avoid being shaped by the past. But some freedom is better than none. Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow (pp. 58-64). HarperCollins. Kindle Edition.

Picture 1

What Will the Future Look Like?

Centuries ago human knowledge increased slowly, so politics and economics changed at a leisurely pace too. Today our knowledge is increasing at breakneck speed, and theoretically we should understand the world better and better. But the very opposite is happening. Our new-found knowledge leads to faster economic, social and political changes; in an attempt to understand what is happening, we accelerate the accumulation of knowledge, which leads only to faster and greater upheavals. Consequently we are less and less able to make sense of the present or forecast the future. In 1016 it was relatively easy to predict how Europe would look in 1050. Sure, dynasties might fall, unknown raiders might invade, and natural disasters might strike; yet it was clear that in 1050 Europe would still be ruled by kings and priests, that it would be an agricultural society, that most of its inhabitants would be peasants, and that it would continue to suffer greatly from famines, plagues and wars. In contrast, in 2016 we have no idea how Europe will look in 2050. We cannot say what kind of political system it will have, how its job market will be structured, or even what kind of bodies its inhabitants will possess. Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow (p. 58). HarperCollins. Kindle Edition.

Picture 1

Change begets Change

Some complex systems, such as the weather, are oblivious to our predictions. The process of human development, in contrast, reacts to them. Indeed, the better our forecasts, the more reactions they engender. Hence paradoxically, as we accumulate more data and increase our computing power, events become wilder and more unexpected. The more we know, the less we can predict. Imagine, for example, that one day experts decipher the basic laws of the economy. Once this happens, banks, governments, investors and customers will begin to use this new knowledge to act in novel ways, and gain an edge over their competitors. For what is the use of new knowledge if it doesn’t lead to novel behaviours? Alas, once people change the way they behave, the economic theories become obsolete. We may know how the economy functioned in the past – but we no longer understand how it functions in the present, not to mention the future. This is not a hypothetical example. In the middle of the nineteenth century Karl Marx reached brilliant economic insights. Based on these insights he predicted an increasingly violent conflict between the proletariat and the capitalists, ending with the inevitable victory of the former and the collapse of the capitalist system. Marx was certain that the revolution would start in countries that spearheaded the Industrial Revolution – such as Britain, France and the USA – and spread to the rest of the world. Marx forgot that capitalists know how to read. At first only a handful of disciples took Marx seriously and read his writings. But as these socialist firebrands gained adherents and power, the capitalists became alarmed. They too perused Das Kapital, adopting many of the tools and insights of Marxist analysis. In the twentieth century everybody from street urchins to presidents embraced a Marxist approach to economics and history. Even diehard capitalists who vehemently resisted the Marxist prognosis still made use of the Marxist diagnosis. 
When the CIA analysed the situation in Vietnam or Chile in the 1960s, it divided society into classes. When Nixon or Thatcher looked at the globe, they asked themselves who controls the vital means of production. From 1989 to 1991 George Bush oversaw the demise of the Evil Empire of communism, only to be defeated in the 1992 elections by Bill Clinton. Clinton’s winning campaign strategy was summarised in the motto: ‘It’s the economy, stupid.’ Marx could not have said it better. As people adopted the Marxist diagnosis, they changed their behaviour accordingly. Capitalists in countries such as Britain and France strove to better the lot of the workers, strengthen their national consciousness and integrate them into the political system. Consequently when workers began voting in elections and Labour gained power in one country after another, the capitalists could still sleep soundly in their beds. As a result, Marx’s predictions came to naught. Communist revolutions never engulfed the leading industrial powers such as Britain, France and the USA, and the dictatorship of the proletariat was consigned to the dustbin of history. This is the paradox of historical knowledge. Knowledge that does not change behaviour is useless. But knowledge that changes behaviour quickly loses its relevance. The more data we have and the better we understand history, the faster history alters its course, and the faster our knowledge becomes outdated. Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow (pp. 56-58). HarperCollins. Kindle Edition.

Picture 1

Three Biological Parents

Mitochondria are tiny organelles within human cells, which produce the energy used by the cell. They have their own set of genes, which is completely separate from the DNA in the cell’s nucleus. Defective mitochondrial DNA leads to various debilitating or even deadly diseases. It is technically feasible with current in vitro technology to overcome mitochondrial genetic diseases by creating a ‘three-parent baby’. The baby’s nuclear DNA comes from two parents, while the mitochondrial DNA comes from a third person. In 2000 Sharon Saarinen from West Bloomfield, Michigan, gave birth to a healthy baby girl, Alana. Alana’s nuclear DNA came from her mother, Sharon, and her father, Paul, but her mitochondrial DNA came from another woman. From a purely technical perspective, Alana has three biological parents. NOTE: At present it is technically unfeasible, and illegal, to replace nuclear DNA, but if and when the technical difficulties are solved, the same logic that favoured the replacement of defective mitochondrial DNA would seem to warrant doing the same with nuclear DNA. Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow (pp. 53-54). HarperCollins. Kindle Edition.

Picture 1

Falling into the Future

When people realise how fast we are rushing towards the great unknown, and that they cannot count even on death to shield them from it, their reaction is to hope that somebody will hit the brakes and slow us down. But we cannot hit the brakes, for several reasons. Firstly, nobody knows where the brakes are. While some experts are familiar with developments in one field, such as artificial intelligence, nanotechnology, big data or genetics, no one is an expert on everything. No one is therefore capable of connecting all the dots and seeing the full picture. Different fields influence one another in such intricate ways that even the best minds cannot fathom how breakthroughs in artificial intelligence might impact nanotechnology, or vice versa. Nobody can absorb all the latest scientific discoveries, nobody can predict how the global economy will look in ten years, and nobody has a clue where we are heading in such a rush. Since no one understands the system any more, no one can stop it. Secondly, if we somehow succeed in hitting the brakes, our economy will collapse, along with our society. As explained in a later chapter, the modern economy needs constant and indefinite growth in order to survive. If growth ever stops, the economy won’t settle down to some cosy equilibrium; it will fall to pieces. That’s why capitalism encourages us to seek immortality, happiness and divinity. There’s a limit to how many shoes we can wear, how many cars we can drive and how many skiing holidays we can enjoy. An economy built on everlasting growth needs endless projects – just like the quests for immortality, bliss and divinity. Harari, Yuval Noah. Homo Deus: A Brief History of Tomorrow (p. 51). HarperCollins. Kindle Edition.

Picture 1

The Brain Stem Behind Creation

Dividing Light from the Darkness, Michelangelo (image via Wikipedia).

University and scientific research center programs are increasingly finding it useful to employ artists and illustrators to help them see things in a new way. Few works of art from the Renaissance have been studied and pored over as meticulously as Michelangelo's frescos in the Sistine Chapel. Yet the Master may still have some surprises hidden for an illustrator-scientist.

Biomedical illustrator Ian Suk (BSc, BMC) and neurological surgeon Rafael Tamargo (MD, FACS), both of Johns Hopkins, proposed in a 2010 article in the journal Neurosurgery that the panel above, Dividing Light from the Darkness, actually depicts the brain stem of God. (All images are from the paper itself; comparison rights belong to Suk and Tamargo.)

Using a series of comparisons of the unusual shadows and contours on God's neck to photos of actual brain stems, the evidence seems completely overwhelming that Michelangelo used his own limited anatomical studies to depict the brain stem. It's unlikely even the educated members of Michelangelo's audience would have recognized it. I encourage you to look over the paper and enlarge the images in the slideshow: Suk and Tamargo are utterly convincing. Unlike R. Douglas Fields in a previous 2010 blog post on Scientific American, I don't think there's room to believe this is a case of pareidolia. I imagine the thrill of feeling Michelangelo communicating directly with the authors across the centuries was immense.

Links:
- Neurosurgery, Vol. 66, pp. 851-861, May 2010
- Press release
- Ian Suk – Johns Hopkins Department of Art as Applied to Medicine
- Rafael Tamargo, MD – Johns Hopkins Medicine
- "Michelangelo's Secret Message in the Sistine Chapel: a juxtaposition of God and the brain" by R. Douglas Fields, Guest Blog, Scientific American

For the third year running, we are turning September into a month-long celebration of science artists by delivering new sciart to invade your eyeballs: the SciArt Blitz! Can't get enough? Check out what was previously featured on this day: 2013, The Drawings Behind Charles R. Knight's Famous Paintings; 2012, Coronal Mass Ejection from NASA.

About the Author: Glendon Mellow is a fine artist, illustrator and tattoo designer working in oil and digital media, based in Toronto, Canada. He tweets @FlyingTrilobite. You can see Glendon's work-in-progress at The Flying Trilobite blog and portfolio at www.glendonmellow.com. Follow on Twitter @symbiartic. The views expressed are those of the author and are not necessarily those of Scientific American.


How Do We Measure the Distance to the Stars?

Yeah, I'll just leave that here. (Photo by SciShow, from the video.)

Hey, remember that SciShow video I posted about, when I visited the adorable Hank Green in Montana and filmed a short thing with me talking about the smallest star in the Universe? While I was up there, Hank and I sat down to do a short conversation to promote Comic Relief, a charity that's raising money to help educate (and feed) kids in Zambia.

Hank and I talked distance. Specifically, how do you figure it out? Stars are far away, yet we seem to be pretty confident when we give their distances. It turns out, the answer is right in front of your nose. Watch.

That was fun! And Hank was honestly excited about the topic, and the very fact that we can know what we know. That's one of the reasons I like him. Also, as you saw in the opening part of the video, this was done to raise money for kids in Africa, which is pretty cool by me. As it says in the YouTube video show notes:

Help more students learn by giving to Comic Relief at http://www.comicrelief.com/SOYT. Or if you're in the US, you can text SOYT12 to 71777; message and data rates may apply. If you're in the UK, text SOYT12 to 70005. Texts cost £5 plus your standard network message charge; £5 per text goes to Comic Relief. You must be 16 or over, and please ask the bill payer's permission. For full terms and conditions and more information go to www.comicrelief.com/terms-of-use

Because stars may be far away, but no one on Earth really is. Help 'em out if you can.
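The "right in front of your nose" answer is parallax: blink one eye, then the other, and your nose seems to jump against the background, and the same geometry works for stars as Earth swings around its orbit. A minimal sketch of the arithmetic (the Proxima Centauri parallax is a standard published value, not from this post):

```python
# Parallax: as Earth orbits the Sun, a nearby star's apparent position
# shifts against the distant background. If the parallax angle p is
# measured in arcseconds, the distance comes out directly in parsecs:
#     d [pc] = 1 / p [arcsec]

def distance_parsecs(parallax_arcsec: float) -> float:
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be a positive angle")
    return 1.0 / parallax_arcsec

PC_TO_LY = 3.26156  # light-years per parsec

# Proxima Centauri has a measured parallax of about 0.768 arcsec:
d_pc = distance_parsecs(0.768)
print(f"{d_pc:.2f} pc = {d_pc * PC_TO_LY:.1f} light-years")
```

Smaller angles mean bigger distances, which is why the method runs out of steam for very remote stars: the shift becomes too tiny to measure.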


How Advanced Are We Earthlings? Here's a Cosmic Yardstick

We humans like to think ourselves pretty advanced, and with no other technology-bearing beings to compare ourselves to, our back-patting doesn't have to take context into account. After all, we harnessed fire, invented stone tools and the wheel, developed agriculture and writing, built cities, and learned to use metals. Then, a mere few moments ago from the perspective of cosmic time, we advanced even more rapidly, developing telescopes and steam power, discovering gravity and electromagnetism and the forces that hold the nuclei of atoms together.

Meanwhile, the age of electricity was transforming human civilization. You could light up a building at night, speak with somebody in another city, or ride in a vehicle that needed no horse to pull it, and humans were very proud of themselves for achieving all of this. In fact, by the year 1899, these developments purportedly prompted U.S. patent office commissioner Charles H. Duell to remark, "Everything that can be invented has been invented."

We really have come a long way from the cave, but how far can we still go? Is there a limit to our technological progress? Put another way, if Duell was dead wrong in the year 1899, might his words be prophetic for the year 2099, or 2199? And what does that mean for humanity's distant future?

Teenage Years

The answer to that question, in part, hinges on our longevity as a species. To advance far ahead in science, technology, and the wisdom to use them, we need time. The history of life on Earth is a history of extinction, and despite the advances we've made to date, we're still quite vulnerable, both to nature and to ourselves. Thus, the measure of how advanced we are, and how advanced we might someday become, is linked to our ability to avoid extinction. With that in mind, Carl Sagan used to say humans are in a period of "technological adolescence."
We're developing great physical powers, and depending on how wisely we use them, we could mature into a species with a reasonable chance of reaching old age. Or we'll destroy ourselves because our technology has advanced more rapidly than our wisdom, or succumb to a natural disaster because our technology has not advanced quickly enough.

When he coined the term in the 1970s, Sagan had a very current existential threat in mind: the combined nuclear arsenals of the US and USSR, approaching about 50,000 weapons at the time. Stockpiling more weapons, Sagan said on several occasions, was like collecting one match after another, not realizing that we're surrounded by gasoline fumes. Though that number is now on the decline, the danger from these weapons is still grave, and stories of close calls over the decades tell us how lucky we've been. But luck doesn't keep a species around indefinitely.

Sagan was deeply worried that we might not mature fast enough to escape destruction by our own hand; in his Cosmos TV series, he imagined an ET encyclopedia of planets listing our species with a 40 percent survival probability over the next 100 years. But he was also a situational optimist, confident that expanding our knowledge of the cosmos, and someday learning that we're not alone in it, could make us a lot wiser and improve our chances of survival considerably. "A single message from space will show that it is possible to live through technological adolescence," Sagan wrote in Smithsonian Magazine in 1978. "It is possible that the future of human civilization depends on the receipt of interstellar messages."

Putting a Number on It

Other scientists have tried to define our level of advancement in a semi-quantitative way using what's called the Kardashev Scale, which considers a civilization's energy consumption.
The scale is named after Soviet astronomer Nikolai Kardashev, who 50 years ago proposed an extraterrestrial civilization scale consisting of three types of energy-harnessing capability.

A Type I civilization uses and controls energy on a planetary magnitude. It harnesses and consumes the amount of energy that reaches its home planet from its star. We would be a Type I civilization if we converted all of the solar energy hitting the Earth from space into power for human use, or if we generated and consumed that amount of power through other means. Currently we harness a substantial fraction of that amount, roughly 75 percent, so we are not yet a Type I civilization. Kardashev did not include a Type 0 in his original scale, but that's what we are until we pass the Type I energy threshold, which we're predicted to do within about 100 years.

Obviously there's a lot of wiggle room in the designations, since the home planets of other civilizations won't receive the same amount of starlight energy as Earth does. If Earth were much smaller, or further from the Sun, or if the Sun were less luminous, we could have passed the Type I energy threshold already, not because we'd be any more advanced. Similarly, on a bigger planet, or closer to the Sun, our civilization would have a longer way to go to be Type I.

So energy consumption is merely a guideline, and, importantly, there are other factors. Although we're approaching Type I energy consumption, we still get our energy largely through dirty, non-renewable means. Furthermore, controlling energy on a planetary magnitude also means controlling the various forces of the planet's atmosphere, crust, mantle, and core. A Type I civilization can control the weather, influence the climate, and prevent earthquakes and volcanic eruptions, in fact harnessing their power safely. It is also competent in interplanetary travel.
Using Star Trek for comparison, humans in that fictional future are well able to do all of these things.

Moving On Up

Far more advanced than Type I, a Type II civilization controls the energy of its star, which means it uses energy at a magnitude billions of times higher than Type I. Such a civilization can collect a star's energy not merely with solar panels on planets, moons, or in space; it might build a structure, called a Dyson Sphere, that partly or completely surrounds its own or another star to harness the bulk of its energy.

And Type II civilizations are more mobile. They have interstellar travel that has allowed them to colonize hundreds of star systems, so they could avoid extinction from a supernova, or other events that destroy entire star systems, by simply moving away. Star Trek's humans are interstellar travelers and colonizers, of course, which means they are more than a Type I civilization. But in a Star Trek: The Next Generation episode, the Enterprise finds an ancient Dyson structure along with an earlier starship that had crash-landed on it, all suggesting that humans have not yet reached this level of star-harnessing capability. Furthermore, Type II civilizations can mine and move stars, manipulate black holes, and induce or slow a supernova. This suggests that Star Trek (at least the United Federation of Planets) is somewhere between a Type I and Type II civilization. However, certain aliens on the show, outside of the Federation, have definitely reached Type II.

Type III, the most advanced civilization Kardashev described, has powers of a galactic magnitude. Its inhabitants are capable of transgalactic and intergalactic travel and work with energy levels on the magnitude of a galaxy or cluster of galaxies, so they could survive just about anything short of the end of the universe. Kardashev did not take his scale beyond Type III, but other people have extended it as high as a Type VII.
On these higher levels, proposed capabilities do not always coincide exactly from one person's scale to another, but they all imagine beings with ever-increasing capabilities, such as moving through multiple parallel universes and dimensions, ultimately being able to manipulate all existence. The Star Trek character Q and his people might fit into one of these higher civilization types. The higher you go, the more the members of the civilization (whether biological, or more sentient machines by that point) are effectively deities, which in a way turns the theism-atheism paradigm on its side, inside out, or disintegrates it completely, putting the mortal-to-deity difference onto a sliding scale. The gods lived in the clouds in the minds of our ancestors, and today we cross those clouds routinely. To cave people, we would be gods, despite our vulnerabilities.

The Outlook for Humanity

We sure are vulnerable. But we'll be significantly less vulnerable once we can safely call ourselves a Type I civilization. What is our progress to this end? Well, as stated earlier, we're about 75 percent there in terms of energy. The second aspect, survival, is more qualitative, but there are positive signs. Though we haven't perfected interplanetary travel, we do have it. We send probes around our star system (and we even have a few on their way into interstellar space). Transporting humans between planets is merely an engineering issue, something we could have done already with sufficient effort and money. Without necessitating any major new discovery, we could build colonies in space near the Earth and moon or slightly further away, keeping at least a few thousand people safe from a planetary disaster, and that could be reality in a matter of decades.

We're making a little progress with earthquakes, at least learning how to detect them before they strike to give people some warning, although we can't yet intervene to prevent them.
We're monitoring near-Earth objects like asteroids, and at least discussing programs that would be directed at diverting any dangerous body from hitting Earth. And, amazingly, earlier this year researchers in Iceland drilled into magma that was intruding into the Earth's crust, constituting a major breakthrough toward an ability to harness volcano power. Along with that would come an ability to siphon off the accumulating magma pressure that causes volcanic eruptions.

So our capabilities hint that we are going in the direction of a Type I civilization. Will we get there fast enough? Nobody can say for sure, but it does look hopeful. And when we get there, there will still be quite a lot left to invent.

(Image by Vadim Sadovski / Shutterstock)
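The "roughly 75 percent" figure above reads most naturally on a logarithmic scale: in raw watts, humanity's consumption is a tiny fraction of the sunlight reaching Earth. Carl Sagan's continuous version of the Kardashev rating, K = (log10 P - 6) / 10, captures exactly this. A minimal sketch (the world-power figure is my assumed round number, not from the article):

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10,
    so 10^6 W maps to K = 0, 10^16 W to K = 1, 10^26 W to K = 2."""
    return (math.log10(power_watts) - 6.0) / 10.0

WORLD_POWER_W = 1.8e13  # rough present-day human power consumption, watts (assumed)

k = kardashev_rating(WORLD_POWER_W)
print(f"K = {k:.2f}")  # ~0.73: roughly three-quarters of the way to Type I
```

On this logarithmic measure a civilization at K = 0.73 still has to multiply its power use several-hundred-fold to reach K = 1, which is why the last "25 percent" is predicted to take on the order of a century.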


A new map places the Milky Way within a large supercluster of galaxies

A new map places the Milky Way (black dot) within a large supercluster of galaxies (white dots) by tracing the gravitational pull of galaxies toward one another. White filaments reveal the paths of galaxies moving toward a gravitational center in the new supercluster, dubbed "Laniakea." (Blue, low galaxy density; green, intermediate; red, high.)

It's not the first time that scientists have mapped the Milky Way's neighborhood, but previous maps couldn't identify which galaxies were bound together by gravity to form the Milky Way's supercluster. R. Brent Tully and his colleagues have defined Laniakea's boundaries and galactic inhabitants by looking at how galaxies move through space. The team used a measurement called "peculiar motion," which takes a galaxy's total movement and subtracts the motion contributed by the expansion of the universe. From there, scientists can generate flow lines that indicate how galaxies are moving, revealing the gravitational center that is drawing them in. These attractors control the behavior of member galaxies, forming the cores of superclusters.

But determining the peculiar motions that point toward these cores is tricky. "It's a really difficult observation to make, per galaxy," says David Schlegel, a physicist at Lawrence Berkeley National Laboratory in California. Schlegel, who is working on a project that will map 25 million galaxies, spent some time tackling similar maps in graduate school. "A lot of people actually worked on it, but it was such a mess that essentially all of them gave up," he says. "This group, Tully in particular, has persevered and kept working on it."

After studying the peculiar motions of 8,000 galaxies, Tully and his colleagues could identify which gravitational center controlled the Milky Way and its galactic neighbors. They used that information to define the extent of the supercluster.
Simply put, galaxies whose motion is controlled by Laniakea's Great Attractor, located in the direction of the constellation Centaurus, are part of the Laniakea supercluster. Galaxies that are being pulled toward a different attractor are in a different supercluster (the next one over is called Perseus-Pisces), even if they're right next to each other in the sky.

"We're finding the edges, the boundaries," Tully says. "It really is similar to the idea of watersheds on the surface of the planet. The edges of watersheds are pretty obvious when you're in the Rocky Mountains, but it's a lot less obvious if you're on really flat land. Still, the water knows which way to go."

Within the supercluster, galaxies are strung like beads on cosmic strings, each anchored to the Great Attractor. The Milky Way is at the fringe of one of those strings, perched on the edge of the Local Void, an area where, as the name suggests, there isn't much to be found. These kinds of large-scale strings and voids are common throughout the universe. But Tully notes one surprise that emerged while mapping Laniakea: the supercluster is being yanked on by an even larger assemblage of galaxies, called the Shapley Concentration. "It's a really big thing, and we're being pulled toward it. But we don't have enough information yet to find the Shapley Concentration's outline," Tully says. "We might be part of something even bigger."

Follow Nadia Drake on Twitter.
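The peculiar-motion measurement described above, a galaxy's total observed velocity minus the part contributed by cosmic expansion, can be sketched in a few lines (the Hubble-constant value and the example numbers are assumed round figures, not from the article):

```python
H0 = 70.0  # Hubble constant, km/s per megaparsec (assumed round value)

def peculiar_velocity(observed_kms: float, distance_mpc: float) -> float:
    """Observed radial velocity minus the Hubble-flow component H0 * d."""
    hubble_flow = H0 * distance_mpc  # km/s contributed by cosmic expansion
    return observed_kms - hubble_flow

# A hypothetical galaxy 100 Mpc away, observed receding at 7,300 km/s:
# expansion accounts for 7,000 km/s, leaving 300 km/s of peculiar motion
# attributable to the gravitational pull of nearby structure.
print(peculiar_velocity(7300.0, 100.0))  # 300.0
```

The hard part in practice is the distance: redshifts are easy to measure, but the independent distance estimates needed to subtract the Hubble flow carry large per-galaxy errors, which is why Schlegel calls it "a really difficult observation to make."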


Cosmic Megastructures - Could We Build a Ringworld?

An artist's impression of a Ringworld.

In our cosmic megastructures series, PopMech explores some of the key engineering and design challenges in constructing gigantic structures for use by humankind in space. Today: a Niven Ring or Ringworld, an enormous slice of real estate encircling a star.

Name: Niven Ring, or Ringworld
Named for: Larry Niven's 1970 novel Ringworld and its sequels.
Selected Science Fiction Portrayals: Besides those featured in Niven's novels, similar but smaller structures, called Halos, appear in the Halo video game and media franchise; also, the Orbitals of Iain M. Banks' Culture novels and short stories.

Someday, when humankind outgrows planet Earth, we might aim to build a habitat so vast we could never overpopulate it. Sci-fi author Larry Niven conjured up such a megastructure for his award-winning 1970 book Ringworld. Niven imagined a ring with a radius of 93 million miles (the Sun-Earth distance) with the Sun placed at the center. The ring would reach some 600 million miles around and a million miles wide. The vast landscape could comfortably support perhaps trillions of humans (or another similarly ambitious, technologically advanced race). "The thing is roomy enough: three million times the area of the Earth. It will be some time before anyone complains about the crowding," Niven wrote in a 1974 essay entitled "Bigger Than Worlds."

Niven figured a Ringworld would have a thickness of a few thousand feet, and require raw materials with a mass equal to that of Jupiter. Mountain "walls" a thousand miles high would line each rim, preventing the atmosphere from leaking into space. The inner surface could be sculpted like Earth's surface, full of great (though shallow) oceans, soaring mountains, and prodigious farmland, or whatever its builders desired.

Could a Ringworld ever be made?
While the concept does not bend physics past the point of breaking, it would require truly extreme engineering and an utter mastery of the forces of nature. According to Anders Sandberg, a research fellow at Oxford University's Future of Humanity Institute who has studied megastructure concepts, a Ringworld "is an amazingly large structure that's way beyond what we can normally imagine, but it's also deeply problematic."

Establishing Gravity

When imagining the ring, Niven started with the concept of a Dyson Sphere, an idea explored by physicist Freeman Dyson a decade prior to Ringworld's publication. In its usual science fiction presentation as a "ping pong ball around a star," Niven said, a solid Dyson Sphere lacks gravity. Rotating the sphere would create gravity via centrifugal force, but only the equatorial regions would reap the benefits. "So," Niven tells PM, "I just used the equator." A Niven Ring, then, can be thought of as the habitat-friendly equatorial slice of a Dyson Sphere. To get Earth-like gravity, the Ringworld would need to spin at nearly three million miles per hour. Very fast, to be sure, but in a frictionless space environment it could be doable: the ring could work up to that speed over time and then maintain it with little additional thrusting.

Managing the Sun

Although it would be equidistant from its central star at all points, the Ringworld would not, in fact, be gravitationally stable. Any perturbing force from, say, a meteorite strike or a close encounter with another star could throw the Ringworld out of equilibrium and onto a cataclysmic collision course. "A Ringworld will tend to drift off whenever it gets a chance," Sandberg says. Readers of the original Ringworld, including students at the Massachusetts Institute of Technology, wrote letters to Niven about this and other technical issues related to the megastructure. Niven addressed the problem in the 1980 sequel, The Ringworld Engineers.
Large rockets placed along the Ringworld's edge would have to fire periodically to keep the megastructure properly situated away from its sun.

For residents of the Ringworld, that sun would always be directly overhead at a perpetual high noon. To create a day-night cycle and save plant life from frying, Niven envisioned a set of "shadow squares" orbiting the sun at about Mercury's distance. The parts of the Ringworld between the squares would experience roving daylight, while the eclipsed portions would rest in the shade; the whole length of the Ringworld would be checkered light and dark. "The builders, if they're something like humans, will want day and night because they will want an imitation of their own planet," Niven says.

The Ringworld's arch, as seen from a great ocean. (Photo Credit: Tim Russell, courtesy of larryniven.net)

Solar panels on the immense shadow squares could collect energy to power the structure. Energy could be beamed via laser from the squares to receiver stations along the Ringworld's rim, away from inhabited "land." Lasers would also come in handy for vaporizing asteroids or comets that might smack into the Ringworld. As a big, thin target, a Ringworld would be devastated by a high-speed impactor: a hole explosively punched through it could let the atmosphere eventually drain out.

Impossible Strength?

Material strength is a potential showstopper for a Ringworld. Because of its bulk, the megastructure would be subjected to mechanical stresses violent enough to break any known physical molecular bonds. "The Ring needs to be superstrong," Sandberg says. "Mere molecular bonds will not do." For super-strength, the best bet would be, well, the "strong" force, the grippiest of the four fundamental forces of nature. It has 137 times the strength of electromagnetism, a million times that of the weak force, and a duodecillion (10^39) times that of puny gravity. Yet it operates only on the femtometer scale of the atomic nucleus.
The strong force crams like-charged protons into an atomic nucleus. "The electromagnetic repulsion between the [protons] would love to split them apart, but you have the strong nuclear force gluing them together," Sandberg says. In our present technological state, we are quite good at manipulating electromagnetism and dealing with gravity. If we could learn to wield the strong force, it would suffice for the structural integrity of a Niven Ring. The strong force is mediated by particles called gluons; if we could rip apart quarks and use their "glue" beyond the nuclear scale, all sorts of architectural and engineering feats would become possible. "We have no clue how to control the strong nuclear force," Sandberg says, "but it could be that advanced civilizations know how."

Niven avoided this can of worms in his stories by inventing a magic, milky-gray material called "scrith." He envisioned it being somehow producible by transmutation of elements, via high-tech fusion. Transmutation of elements, such as of the hydrogen and helium that predominate within Jupiter and Saturn, would be necessary anyhow to supply enough (non-scrith) material to build the megastructure.

From Worlds to a Ringworld

As for the actual Ringworld building process, Niven sketched it as follows: the solar system's planets would be dismantled by machines and reformatted into disc-shaped plates. Cables would link these plates and, in time, the plates would be pulled together to form a ring.

Given the miracle materials and advanced element transmutation required for a colossal Ringworld, smaller ringlike habitats make far more sense from an engineering perspective. The "Halos" in the eponymous video games, for instance, are about 10,000 miles in diameter; they could plausibly be made of steel. Bishop Rings, another proposed ring megastructure by nanotechnologist Forrest Bishop, would be a "mere" 1,200 miles in diameter and made of ultra-stiff carbon nanotubes.
These rings would not encircle a star or planet, but could nestle stably at a Lagrangian point, where the gravitational pull from a planet matches that of the sun.

A ship swoops toward a Halo ring, under construction. The Ark, a construction and control station for Halos, is seen at the bottom of the image. (Photo Credit: commorancy/Flickr/Wikipedia)

Finally, the rationale for ever pursuing a Ringworld is questionable in the first place. The civilization's rulers would be placing an awful lot of eggs in one basket. A catastrophic failure somewhere on the Ring, perhaps of a stabilizing thruster, could doom the entire venture and its trillions of inhabitants. (Niven explores this kind of crisis in The Ringworld Engineers.) Niven himself points out that Ringworlds are really for telling a good story rather than offering a prescription for an Earth whose population has runneth over. "Even if we go for big stuff, there is no reason to build a Ringworld," Niven says, "when we could build a million [other] things and put them in orbit, rather than in orbit around the sun."
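The spin rate quoted earlier checks out with basic circular-motion arithmetic: a rotating ring simulates gravity when its centripetal acceleration v^2 / r equals Earth's surface gravity. A quick sanity check (the constants are standard physical values, not from the article):

```python
import math

G_TARGET = 9.81        # m/s^2, Earth-like "gravity" felt at the rim
RADIUS_M = 1.496e11    # ring radius: 1 AU, about 93 million miles, in meters

# Require v^2 / r = g, so the rim speed is v = sqrt(g * r):
rim_speed = math.sqrt(G_TARGET * RADIUS_M)   # meters per second
rim_speed_mph = rim_speed * 3600 / 1609.344  # convert m/s to miles per hour

print(f"{rim_speed_mph / 1e6:.2f} million mph")  # ~2.71 million mph
```

That is "nearly three million miles per hour," as the article says, about 0.4 percent of the speed of light at the rim, which is exactly why the stresses dwarf anything molecular bonds can hold together.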


We Use DNA to Predict Our Medical Futures, But it May Have More to Say About the Past

Every day our DNA breaks a little. Special enzymes keep our genome intact while we're alive, but after death, once the oxygen runs out, there is no more repair. Chemical damage accumulates, and decomposition brings its own kind of collapse: membranes dissolve, enzymes leak, and bacteria multiply. How long until DNA disappears altogether? Since the delicate molecule was discovered, most scientists had assumed that the DNA of the dead was rapidly and irretrievably lost. When Svante Pääbo, now the director of the Max Planck Institute for Evolutionary Anthropology in Germany, first considered the question more than three decades ago, he dared to wonder if it might last beyond a few days or weeks. Pääbo and other scientists have now shown that if only a few of the trillions of cells in a body escape destruction, a genome may survive for tens of thousands of years.

In his first book, Neanderthal Man: In Search of Lost Genomes, Pääbo logs the genesis of one of the most groundbreaking scientific projects in the history of the human race: sequencing the genome of a Neanderthal, a human-like creature who lived until about 40,000 years ago. Pääbo's tale is part hero's journey and part guidebook to shattering scientific paradigms. He began dreaming about the ancients on a childhood trip to Egypt from his native Sweden. When he grew up, he attended medical school and studied molecular biology, but the romance of the past never faded. As a young researcher, he tried to mummify a calf liver in a lab oven and then extract DNA from it. Most of Pääbo's advisors saw ancient DNA as a "quaint hobby," but he persisted through years of disappointing results, patiently awaiting the technological innovation that would make the work fruitful. All the while, Pääbo became adept at recruiting researchers, luring funding, generating publicity, and finding ancient bones. Eventually, his determination paid off: in 1996, he led the effort to sequence part of the Neanderthal mitochondrial genome.
(Mitochondria, which serve as cells' energy packs, appear to be remnants of an ancient single-celled organism, and they have their own DNA, which children inherit from their mothers. This DNA is simpler to read than the full human genome.) Finally, in 2010, Pääbo and his colleagues published the full Neanderthal genome.

That may have been one of the greatest feats of modern biology, yet it is also part of a much bigger story about the extraordinary utility of DNA. For a long time, we have seen the genome as a tool for predicting the future. Do we have the mutation for Huntington's? Are we predisposed to diabetes? But it may have even more to tell us about the past: about distant events and about the network of lives, loves, and decisions that connects them.

Empires

Long before research on ancient DNA took off, Luigi Luca Cavalli-Sforza made the first attempt to rebuild the history of the world by comparing the distribution of traits in different living populations. He started with blood types; much later, his popular 2001 book Genes, Peoples, and Languages explored demographic history via languages and genes. Big historical arcs can also be inferred from the DNA of living people, such as the fact that all non-Africans descend from a small band of humans that left Africa 60,000 years ago. The current distribution across Eurasia of a certain Y chromosome, which fathers pass to their sons, rather neatly traces the outline of the Mongolian Empire, leading researchers to propose that it comes from Genghis Khan, who pillaged and raped his way across the continent in the 13th century.

But in the last few years, geneticists have found ways to explore not just big events but also the dynamics of populations through time.
A 2014 study used the DNA of ancient farmers and hunter-gatherers from Europe to investigate an old question: did farming sweep across Europe and become adopted by the resident hunter-gatherers, or did farmers sweep across the continent and replace the hunter-gatherers? The researchers sampled ancient individuals who were identified as either farmers or hunters, depending on how they were buried and what goods were buried with them. They found a significant difference between the DNA of the two groups, suggesting that even though there may have been some flow of hunter-gatherer DNA into the farmers' gene pool, for the most part the farmers replaced the hunter-gatherers.

Looking at more recent history, Peter Ralph and Graham Coop compared small segments of the genome across Europe and found that any two modern Europeans who lived in neighboring populations, such as Belgium and Germany, shared between two and twelve ancestors over the previous 1,500 years. They identified tantalizing variations as well. Most of the common ancestors of Italians seem to have lived around 2,500 years ago, dating to the time of the Roman Republic, which preceded the Roman Empire. Though modern Italians share ancestors within the last 2,500 years, they share far fewer of them than other Europeans share with their own countrymen. In fact, Italians from different regions of Italy today have about the same number of ancestors in common with one another as they have with people from other countries. The genome reflects the fact that until the 19th century Italy was a group of small states, not the larger country we know today.

In a very short amount of time, the genomes of ancient people have facilitated a new kind of population genetics.
It reveals phenomena that we have no other way of knowing about. Significant events in British history suggest that the genetics of Wales and some remote parts of Scotland should be different from genetics in the rest of Britain, and indeed, a standard population analysis on British people separates these groups out. But this year scientists led by Peter Donnelly at Oxford uncovered a more fine-grained relationship between genetics and history. By tracking subtle patterns across the genomes of modern Britons whose ancestors lived in particular rural areas, they found at least 17 distinct clusters that probably reflect different groups in the historic population of Britain. This work could help explain what happened during the Dark Ages, when no written records were made—for example, how much ancient British DNA was swamped by the invading Saxons of the fifth century.

The distribution of certain genes in modern populations tells us about cultural events and choices, too: after some groups decided to drink the milk of other mammals, they evolved the ability to tolerate lactose. The descendants of groups that didn’t make this choice don’t tolerate lactose well even today.

Mysteries

Analyzing the DNA of the living is much easier than analyzing ancient DNA, which is always vulnerable to contamination. The first analyses of Neanderthal mitochondrial DNA were performed in an isolated lab that was irradiated with UV light each night to destroy DNA carried in on dust. Researchers wore face shields, sterile gloves, and other gear, and if they entered another lab, Pääbo would not allow them back that day. Still, controlling contamination only took Pääbo’s team to the starting line. The real revolution in analysis of ancient DNA came in the late 1990s, with second-generation DNA sequencing techniques.
Pääbo replaced Sanger sequencing, invented in the 1970s, with a technique called pyrosequencing, which meant that instead of sequencing 96 fragments of ancient DNA at a time, he could sequence hundreds of thousands.

Such breakthroughs made it possible to answer one of the longest-running questions about Neanderthals: Did they mate with humans? There was scant evidence that they had, and Pääbo himself believed such a union was unlikely because he had found no trace of Neanderthal genetics in human mitochondrial DNA. He suspected that humans and Neanderthals were biologically incompatible. But now that the full Neanderthal genome has been sequenced, we can see that 1 to 3 percent of the genome of non-Africans living today contains variations, known as alleles, that apparently originated with Neanderthals. That indicates that humans and Neanderthals mated and had children, and that those children’s children eventually led to many of us. The fact that sub-Saharan Africans do not carry the same Neanderthal DNA suggests that Neanderthal-human hybrids were born just as humans were expanding out of Africa 60,000 years ago and before they colonized the rest of the world. In addition, the way Neanderthal alleles are distributed in the human genome tells us about the forces that shaped lives long ago, perhaps helping the earliest non-Africans adapt to colder, darker regions. Some parts of the genome with a high frequency of Neanderthal variants affect hair and skin color, and the variants probably made the first Eurasians lighter-skinned than their African ancestors.

Ancient DNA will almost certainly complicate other hypotheses, like the African-origin story, with its single migratory human band. Ancient DNA also reveals phenomena that we have no other way of knowing about. When Pääbo and colleagues extracted DNA from a few tiny bones and a couple of teeth found in a cave in the Altai Mountains in Siberia, they discovered an entirely new sister group, the Denisovans.
Indigenous Australians, Melanesians, and some groups in Asia may have up to 5 percent Denisovan DNA, in addition to their Neanderthal DNA.

In a very short amount of time, a number of ancients have been sequenced by teams all over the world, and the growing library of their genomes has facilitated a new kind of population genetics. What is it that DNA won’t be able to tell us about the past? It may all come down to what happened in the first moments or days after someone’s death. If, for some reason, cells dry out quickly—if you die in a desert or a dry cave, if you are frozen or mummified—post-mortem damage to DNA can be halted, but it may never be possible to sequence DNA from remains found in wet, tropical climates. Still, even working with only the scattered remains that we have found so far, we keep gaining insights into ancient history. One of the remaining mysteries, Pääbo observes, is why modern humans, unlike their archaic cousins, spread all over the globe and dramatically reshaped the environment. What made us different? The answer, he believes, lies waiting in the ancient genomes we have already sequenced.

There is some irony in the fact that Pääbo’s answer will have to wait until we get more skillful at reading our own genome. We are at the very beginning stages of understanding how the human genome works, and it is only once we know ourselves better that we will be able to see what we had in common with Neanderthals and what is truly different.

Christine Kenneally is the author of The Invisible History of the Human Race, to be published in October.


How Modern Medicine Is Reinventing Death

"Death travelers" are bringing back stories of life beyond death. Thanks to CPR, people can be revived after being dead for up to an hour. Author Judy Bachrach calls them "death travelers" in her new book.

Photograph by Time Life Pictures/Getty

Simon Worrall for National Geographic

Published September 3, 2014

They can fly through walls or circle the planets, turn into pure light or meet long-dead relatives. Many have blissful experiences of universal love. Most do not want to return to the living. When they do, they're often endowed with special powers: They can predict the future or intuit people's thoughts. Many end up unhappy and divorced, rejected by their loved ones or colleagues, burdened with a knowledge they often dare not share. They are the "death travelers."

If this sounds like the movie Flatliners or a science fiction novel by J. G. Ballard, it isn't. These are the testimonies of people who have had near death experiences (NDEs) and returned from the other side to tell the tale. Journalist Judy Bachrach decided to listen to their stories, and on the way cure her own terror of death. Here she talks about how advances in medicine are enabling us to raise the dead, why the scientific and religious communities are hostile to the idea of NDEs, and how a British traffic controller returned from the dead with the ability to predict the stock market.

Your book, Glimpsing Heaven: The Stories and Science of Life After Death, opens with you volunteering to work in a hospice. Why?

The person who put the idea in my head was former First Lady Barbara Bush, whose own daughter had died in hospice at the age of four. One of my best friends was dying of cancer. We were both at the time 32 [years old], and I couldn't get over it. I was terrified of death, and I was terrified of her dying. So I decided to start working in a hospice to get over my terror of death.

Until the 20th century, death was determined by holding a mirror to a patient's mouth.
If it didn't mist over, the person was dead. We now live in what you call the "age of Lazarus." Can you explain?

Everybody who's been revived by CPR, cardiopulmonary resuscitation—and there are more and more of us—is a formerly dead person. We walk every single day among the formerly dead. Death is no longer simply the cessation of breath or heartbeat or even brain stem activity. These days people can be dead for up to an hour and come back among us and have memories. I call them "death travelers" in the book.

One scientist you spoke to suggests that NDEs may simply result from the brain shutting down, like a computer—that, for instance, the brilliant light often perceived at the end of a tunnel is caused by loss of blood or hypoxia, lack of oxygen. How do you counter these arguments?

The problem with the lack of oxygen explanation is that when there is a lack of oxygen, our recollections are fuzzy and sometimes non-existent. The less oxygen you have, the less you remember. But the people who have died, and recall their death travels, describe things in a very clear, concise, and structured way. Lack of oxygen would mean you barely remember anything.

Most death travelers don't want to return to the living, and when they do, they find it is a painful experience. Tell us about Tony Cicoria.

Tony Cicoria is a neurosurgeon from upstate New York. He was like the rest of us once upon a time. He believed death was death, and that was the end. Then he got struck by lightning. He was on a picnic with his family, talking to his mother on the telephone, when a bolt of lightning hit the phone. The next thing he knew, he was lying on the ground saying to himself, "Oh, my God, I'm dead." The way he knew he was dead is because he saw his mother-in-law screaming at him. And he called out to her and said, "I'm here! I'm here!" But she didn't hear anything. Next he was traveling up a flight of steps without walking. He became a bolt of blue light and managed to go through a building.
He flew through walls, and he saw his little kids having their faces painted. Right after that, he felt somebody thumping on his chest. A nurse who was in the vicinity was thumping on his chest. But he did not want to come back to life. Very much like other death travelers, he wanted to stay dead. Being dead is evidently a very interesting experience. And exciting.

You suggest there is a difference between brain function and consciousness. Can you talk about that idea?

This is an area where a lot more scientific research has to be done: that the brain is possibly, and I'm emphasizing the "possibly," not the only area of consciousness. That even when the brain is shut down, on certain occasions consciousness endures. One of the doctors I interviewed, a cardiologist in Holland, believes that consciousness may go on forever. So the postulate among some scientists is that the brain is not the only locus of thought, which is very interesting.

You coin several new terms in the book. What's a Galileo?

I call the scientists who are involved in research into death travel "Galileos" because, like Galileo himself, who was persecuted by the Inquisition for explaining his theories about the universe, scientists involved in research into what occurs after death are also being persecuted. They're denied tenure. They're told that they're inferior scientists and doctors. They're mocked. Anthony Cicoria, the man who was struck by lightning, didn't tell any of his fellow surgeons about his experience for something like 20 years.

Why do you think the scientific community is so hostile to the idea of NDEs?

It's a really good question. I think the scientific community is very much like I used to be. Journalists tend not to be very religious, we tend not to be very credulous, and we tend to believe the worst possible scenario, which, in this case, is nothing. The scientific community is very materialistic.
If you can't see it and you can't measure it, it doesn't exist. When I gave a speech at the NIH [National Institutes of Health], I talked with the top neurologist there. I said, "Are you doing research on what used to be called near death experiences?" He looked at me like I was crazy. He said, "Why? Does it cure anything?"

The Christian Church is also not very keen on this area of inquiry. Why is that?

I think that religion, very much like science, likes to rely on everything that's gone on before. If your grandfather believed something, then you want to believe it. If the scientists who came before you want to believe something, then you believe in it. Because the options for those who deviate are very scary. Most of the people I interviewed got divorced. That is not uncommon among death travelers. You come back and tell your husband or lover or wife what went on, and they look at you like you're nuts. It's a very scary thing to come back and say, "I remember what happened after death." The Christian Church, or the Jewish faith, whichever we're talking about, also has very specific views of what life after death should involve. Everybody I interviewed deviated from the traditional theological views. They didn't see angels necessarily. They don't float in heaven. It's not some happy-clappy area of the universe. It's far more complicated—and interesting—than that.

One of the curious facts I discovered reading your book is that women are far less optimistic about their chances of going to heaven than men are. Why is that?

This was told to me by a monk who died by drowning and then returned. Obviously, he'd had a good deal of experience with people confiding in him and confessing. I think it's because women are very self-critical. We're very hard on ourselves. Nothing is ever good enough about us. We're not smart enough. We're not beautiful enough. Look at what we do to our bodies and our faces in the name of perfection!
And I think that applies to our chances of getting, if you will, into heaven.

For her new book, journalist Judy Bachrach collected the testimonies of people who had near death experiences and returned to tell the tale. Photograph Courtesy of National Geographic Books

Why is it important for you to believe that there is life after death?

It was not important for me, at all, to believe. I'm a journalist. I don't go around thinking, "I really hope there's life after death." Indeed, at the beginning I was the opposite—I didn't want to believe. Yes, death was a source of terror. For me, the worst thing that could happen was nothingness. I would have far preferred to hear that Satan was waiting for me than to learn that there was nothing. But I was absolutely positive that there was nothing after death—that the curtain descends, and that's it. Act III. It's over. The stage is black. And when I first ventured into this strange area of research, I was pretty sure, just as you said, that it was all the result of oxygen deprivation and that these were hallucinations. It was only after I discovered that it can't be the result of oxygen deprivation, and these were not hallucinations, that I realized I had to change my views. That's a very difficult thing to do, particularly when you're past adolescence. But every bit of evidence, every single person I interviewed, forced me to change my views. It was something I did quite unwillingly and with a good deal of skepticism. What I tried to do, as a journalist, was simply record what these people say happened. All I know is what I've reported, which is, when you die, that is not the end. Stuff goes on. That, to me, is weird. But it's true.

Did engaging with this research make you want to die?

No! Nothing makes me want to die! But it did make me less fearful of dying. It was a long process, though. After the first 20 or 30 interviews, I was still terrified of death. All these people were telling me stuff that I never believed could happen.
But gradually I came to accept that what they said was true. So I'm a little less terrified of death now.

You say that having an NDE often invests people with special powers. Tell us about the British air traffic controller.

[Laughs] The British air traffic controller makes me laugh. He told a person I interviewed, a British neuropsychiatrist named Dr. Fenwick, that he had a death experience. Oddly enough, as a result of this death experience, he became terrific at picking and choosing stocks. [Laughs] The psychiatrist goes, "Uh-huh." The guy says, "Yeah, you really should invest in British Telecom." Dr. Fenwick says, "Uh, yeah. Right." And of course the stock soars right after that! Usually these powers involve perceptual abilities, though, [such as] the ability to know what other people are thinking, the ability to know what's going to happen next. So they're usually less materialistic than this gentleman's powers. [Laughs] But, hey, whatever floats your boat.

NDEs are, surely, not the same as a complete death experience. These are generally short episodes not lasting more than an hour and often in hospital settings. No one, as far as I know, has returned from the dead after a long period of time and told us about it. Do we know any more than we did before about what will actually happen when we die?

What's happening now is revolutionary. If you'd told somebody a hundred years ago that they could die for an hour and come back and tell you what happened, that would have been in the realm of theology or philosophy. But now it's in the realm of the real world. It's absolutely true that we don't know what happens, say, after six days being dead. All we know now—and that's one of the reasons I think it's important for scientists to investigate far more—is what happens up to an hour.

How did your friends and peers in the journalistic world react to you writing this book?

It depends who they are. Some of them looked at me like, "Oh, OK. You're nuts.
I never really thought you were before. But now I know you are." Others, because National Geographic is publishing the book, said, "Oh, National Geographic! It must be true then." [Laughs] My religious journalist friends said, "Thank God you're doing it. You were always such a skeptic and a cynic." I have to say that I fall into none of those categories. I'm just a journalist doing what journalists do. I'm interviewing people and trying to find out what is true.

After writing this book, can you say with any more certainty what death is?

Yes, I can. I can say that death is an adventure, which to me is the oddest thing in the world. It takes you from this Earth, this ordinary Earth, into extraordinary places. One of the experiences I describe is of the renowned psychologist Carl Jung, who died when he had a heart attack in his 60s. He was ultimately revived, and came back describing, in great detail, how he had seen the universe. One of the people I interviewed had a similar experience. And that shocked the hell out of me because that's the kind of experience I would love to have. Like an astronaut's delight. You're up there. You can move toward planets or away from planets. You can see the Earth. It's gorgeous. It's interesting. And it doesn't cost a thing.


Time Travel Simulation Resolves “Grandfather Paradox”

What would happen to you if you went back in time and killed your grandfather? A model using photons reveals that quantum mechanics can solve the quandary—and even foil quantum cryptography

Sep 2, 2014 By Lee Billings

On June 28, 2009, the world-famous physicist Stephen Hawking threw a party at the University of Cambridge, complete with balloons, hors d'oeuvres and iced champagne. Everyone was invited but no one showed up. Hawking had expected as much, because he only sent out invitations after his party had concluded. It was, he said, "a welcome reception for future time travelers," a tongue-in-cheek experiment to reinforce his 1992 conjecture that travel into the past is effectively impossible.

But Hawking may be on the wrong side of history. Recent experiments offer tentative support for time travel's feasibility—at least from a mathematical perspective. The study cuts to the core of our understanding of the universe, and the resolution of the possibility of time travel, far from being a topic worthy only of science fiction, would have profound implications for fundamental physics as well as for practical applications such as quantum cryptography and computing.

Closed timelike curves

The source of time travel speculation lies in the fact that our best physical theories seem to contain no prohibitions on traveling backward through time. The feat should be possible based on Einstein's theory of general relativity, which describes gravity as the warping of spacetime by energy and matter. An extremely powerful gravitational field, such as that produced by a spinning black hole, could in principle profoundly warp the fabric of existence so that spacetime bends back on itself. This would create a "closed timelike curve," or CTC, a loop that could be traversed to travel back in time.

Hawking and many other physicists find CTCs abhorrent, because any macroscopic object traveling through one would inevitably create paradoxes where cause and effect break down.
In a model proposed by the theorist David Deutsch in 1991, however, the paradoxes created by CTCs could be avoided at the quantum scale because of the behavior of fundamental particles, which follow only the fuzzy rules of probability rather than strict determinism. "It's intriguing that you've got general relativity predicting these paradoxes, but then you consider them in quantum mechanical terms and the paradoxes go away," says University of Queensland physicist Tim Ralph. "It makes you wonder whether this is important in terms of formulating a theory that unifies general relativity with quantum mechanics."

Experimenting with a curve

Recently Ralph and his PhD student Martin Ringbauer led a team that experimentally simulated Deutsch's model of CTCs for the very first time, testing and confirming many aspects of the two-decades-old theory. Their findings are published in Nature Communications. Much of their simulation revolved around investigating how Deutsch's model deals with the "grandfather paradox," a hypothetical scenario in which someone uses a CTC to travel back through time to murder her own grandfather, thus preventing her own later birth. (Scientific American is part of Nature Publishing Group.)

Deutsch's quantum solution to the grandfather paradox works something like this: Instead of a human being traversing a CTC to kill her ancestor, imagine that a fundamental particle goes back in time to flip a switch on the particle-generating machine that created it. If the particle flips the switch, the machine emits a particle—the particle—back into the CTC; if the switch isn't flipped, the machine emits nothing. In this scenario there is no a priori deterministic certainty to the particle's emission, only a distribution of probabilities. Deutsch's insight was to postulate self-consistency in the quantum realm, to insist that any particle entering one end of a CTC must emerge at the other end with identical properties.
Therefore, a particle emitted by the machine with a probability of one half would enter the CTC and come out the other end to flip the switch with a probability of one half, imbuing itself at birth with a probability of one half of going back to flip the switch. If the particle were a person, she would be born with a one-half probability of killing her grandfather, giving her grandfather a one-half probability of escaping death at her hands—good enough in probabilistic terms to close the causative loop and escape the paradox. Strange though it may be, this solution is in keeping with the known laws of quantum mechanics.

In their new simulation Ralph, Ringbauer and their colleagues studied Deutsch's model using interactions between pairs of polarized photons within a quantum system that they argue is mathematically equivalent to a single photon traversing a CTC. "We encode their polarization so that the second one acts as kind of a past incarnation of the first," Ringbauer says. So instead of sending a person through a time loop, they created a stunt double of the person and ran him through a time-loop simulator to see if the doppelganger emerging from a CTC exactly resembled the original person as he was in that moment in the past. By measuring the polarization states of the second photon after its interaction with the first, across multiple trials the team successfully demonstrated Deutsch's self-consistency in action. "The state we got at our output, the second photon at the simulated exit of the CTC, was the same as that of our input, the first encoded photon at the CTC entrance," Ralph says. "Of course, we're not really sending anything back in time but [the simulation] allows us to study weird evolutions normally not allowed in quantum mechanics."

Those "weird evolutions" enabled by a CTC, Ringbauer notes, would have remarkable practical applications, such as breaking quantum-based cryptography through the cloning of the quantum states of fundamental particles.
"If you can clone quantum states," he says, "you can violate the Heisenberg uncertainty principle," which comes in handy in quantum cryptography because the principle forbids simultaneously accurate measurements of certain kinds of paired variables, such as position and momentum. "But if you clone that system, you can measure one quantity in the first and the other quantity in the second, allowing you to decrypt an encoded message."

"In the presence of CTCs, quantum mechanics allows one to perform very powerful information-processing tasks, much more than we believe classical or even normal quantum computers could do," says Todd Brun, a physicist at the University of Southern California who was not involved with the team's experiment. "If the Deutsch model is correct, then this experiment faithfully simulates what could be done with an actual CTC. But this experiment cannot test the Deutsch model itself; that could only be done with access to an actual CTC."

Alternative reasoning

Deutsch's model isn't the only one around, however. In 2009 Seth Lloyd, a theorist at the Massachusetts Institute of Technology, proposed an alternative, less radical model of CTCs that resolves the grandfather paradox using quantum teleportation and a technique called post-selection, rather than Deutsch's quantum self-consistency. With Canadian collaborators, Lloyd went on to perform successful laboratory simulations of his model in 2011. "Deutsch's theory has a weird effect of destroying correlations," Lloyd says. "That is, a time traveler who emerges from a Deutschian CTC enters a universe that has nothing to do with the one she exited in the future. By contrast, post-selected CTCs preserve correlations, so that the time traveler returns to the same universe that she remembers in the past."

This property of Lloyd's model would make CTCs much less powerful for information processing, although still far superior to what computers could achieve in typical regions of spacetime.
"The classes of problems our CTCs could help solve are roughly equivalent to finding needles in haystacks," Lloyd says. "But a computer in a Deutschian CTC could solve why haystacks exist in the first place."

Lloyd, though, readily admits the speculative nature of CTCs. "I have no idea which model is really right. Probably both of them are wrong," he says. Of course, he adds, the other possibility is that Hawking is correct, "that CTCs simply don't and cannot exist." Time-travel party planners should save the champagne for themselves—their hoped-for future guests seem unlikely to arrive.

More:
"The Quantum Physics of Time Travel," by David Deutsch and Michael Lockwood
"Can Quantum Bayesianism Fix the Paradoxes of Quantum Mechanics?"
Astrophysicist J. Richard Gott on Time Travel
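Deutsch's self-consistency condition, described earlier in the article, can be illustrated with a toy calculation. In the grandfather loop, the outcome inverts itself: if the traveler kills her grandfather with probability p, she is never born, so the killing fails, and consistency demands p = 1 - p, whose only solution is one half. The sketch below (plain Python, not the full density-matrix formalism of the Deutsch model; the damping factor is purely a numerical convenience) finds that fixed point by iteration:

```python
# Toy sketch of Deutsch-style self-consistency for the grandfather loop.
# The loop inverts its own outcome: killing the grandfather (probability p)
# prevents the traveler's birth, so the probability map is f(p) = 1 - p.
# Deutsch's condition demands a fixed point p = f(p).

def loop_map(p):
    return 1.0 - p  # kill succeeds -> traveler never born -> no kill

p = 0.9  # arbitrary starting guess
for _ in range(50):
    p = 0.5 * (p + loop_map(p))  # damped update; bare iteration would oscillate

print(p)  # settles at 0.5, the half/half resolution described in the text
```

Any starting guess lands on the same answer, which is the point: the loop has exactly one self-consistent probability assignment, so the paradox dissolves rather than the physics breaking.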


Andromeda

Behold, the Andromeda galaxy!

All photos by the Local Group Survey Team and T.A. Rector (University of Alaska–Anchorage)

Nice, eh? Now, I had to shrink the image to fit the 1,440 pixel width of the blog (which is usually 590 pixels, but for you, dear BABloggees, this just cried out to be wider). How much did I shrink it? By a factor of 30. The full resolution JPG is a staggering 48,327 x 12,185 pixels, and weighs in at 340 MB! The TIF version is 717 MB, just so’s you know. This jaw-dropping image of Andromeda (also called M31) was put together by my friend, astronomer and astrophotographer extraordinaire Travis Rector. The data were from the Local Group Survey, a project to look at star-forming regions of nearby galaxies*. The mosaic uses 10 separate pointings of the Kitt Peak 4-meter Mayall telescope to cover the galaxy completely. The image used five filters: ultraviolet, blue, visible/yellow, infrared, and a narrow-band H-α. The first four highlight stars and dust in the galaxy, and the last picks out star-forming nebulae, which litter Andromeda. In fact, I decided to choke my bandwidth and grab the whole image just so I could show you a string of such nebulae in full resolution: Star-making factories in M31; note you can easily see individual stars. Yegads. This chain of nebulae is located at about the 4:00 position, along the outer spiral arm. See if you can find it. The resolution on this image is amazing, especially considering the full image covers about 3.5° of the sky—you could fit seven full Moons across this picture! The galaxy itself is a favorite. It’s the closest big spiral to the Milky Way, about 2.5 million light years away. As galaxies go, that’s our next-door neighbor … but be aware that it’s still 25 quintillion kilometers away! Or 15 quintillion miles, if you prefer (or 82 sextillion feet, or a cool septillion inches. I quite enjoy large numbers). M32, a satellite of Andromeda, is itself a full-blown (if dwarf) galaxy.
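The big numbers in the paragraph above are easy to sanity-check. A quick sketch in Python (the kilometers-per-light-year constant is the standard IAU figure; the rest is plain unit conversion):

```python
# Check the distance figures quoted for Andromeda (M31).
KM_PER_LY = 9.4607e12        # kilometers in one light-year (IAU value)
distance_ly = 2.5e6          # M31 is about 2.5 million light-years away

km = distance_ly * KM_PER_LY         # ~2.4e19 km: tens of quintillions
miles = km / 1.609344                # ~1.5e19 mi: the "15 quintillion miles"
feet = miles * 5280
inches = feet * 12                   # closing in on a septillion (1e24)

print(f"{km:.1e} km, {miles:.1e} miles, {inches:.1e} inches")
```

The mileage matches the article exactly, and the kilometer figure comes out a hair under the quoted 25 quintillion, which is just generous rounding on a number with one significant digit anyway.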
Andromeda is visible to the unaided eye, and in fact this is a good time to see it; it’s up high enough to spot in the northeast around 10 p.m. this time of year for most northern hemisphere folks. Binoculars show it to be an elongated smudge, and a small ‘scope will start to show some details, like M32, a dwarf satellite galaxy of Andromeda (seen in the full image here to the upper left of the bigger galaxy’s center; another satellite galaxy, M110, is just outside the field of view below M31). I can’t tell you how many times I’ve viewed M31, from using my own eyes up to Hubble Space Telescope imaging. It’s rare to get both a deep, high-resolution image like this as well as cover the entire width of the galaxy. Travis did an amazing job here. As usual. And if you crave more photos of this cosmic beauty, why, I can help you there, too.

Update (Sep. 3, 2014 at 22:30 UTC): To be clear about credit, I contacted the Local Group Galaxy Survey lead Phil Massey, who added: "The LGGS M31 images were taken with the Mayall 4-meter telescope at Kitt Peak National Observatory by a team of astronomers led by Phil Massey (Lowell Observatory), and included Knut Olsen, George Jacoby, Chris Smith (NOAO), Paul Hodge (University of Washington), and Wayne Schlingman (OSU); the survey and analysis was partially funded by the NSF."


Milky Way is on the outskirts of 'immeasurable heaven' supercluster

Astronomers discover that our galaxy is a suburb of a supercluster of 100,000 large galaxies they have called Laniakea

Ian Sample, science editor
The Guardian, Wednesday 3 September 2014 18.07 BST

The Laniakea supercluster. Image: SDvision/Guardian

In what amounts to a back-to-school gift for pupils with nerdier leanings, researchers have added a fresh line to the cosmic address of humanity. No longer will a standard home address followed by "the Earth, the solar system, the Milky Way, the universe" suffice for aficionados of the extended astronomical location system.

The extra line places the Milky Way in a vast network of neighbouring galaxies or "supercluster" that forms a spectacular web of stars and planets stretching across 520m light years of our local patch of universe. Named Laniakea, meaning "immeasurable heaven" in Hawaiian, the supercluster contains 100,000 large galaxies that together have the mass of 100 million billion suns.

Our home galaxy, the Milky Way, lies on the far outskirts of Laniakea near the border with another supercluster of galaxies named Perseus-Pisces. "When you look at it in three dimensions, it looks like a sphere that's been badly beaten up and we are over near the edge, being pulled towards the centre," said Brent Tully, an astronomer at the University of Hawaii in Honolulu.

Astronomers have long known that just as the solar system is part of the Milky Way, so the Milky Way belongs to a cosmic structure that is much larger still. But their attempts to define the larger structure had been thwarted because it was impossible to work out where one cluster of galaxies ended and another began.

Tully's team gathered measurements on the positions and movement of more than 8,000 galaxies and, after discounting the expansion of the universe, worked out which were being pulled towards us and which were being pulled away.
This allowed the scientists to define superclusters of galaxies that all moved in the same direction. The work, published in Nature, gives astronomers their first look at the vast group of galaxies to which the Milky Way belongs. A narrow arch of galaxies connects Laniakea to the neighbouring Perseus-Pisces supercluster, while two other superclusters called Shapley and Coma lie on the far side of our own.

Tully said the research will help scientists understand why the Milky Way is hurtling through space at 600km a second towards the constellation of Centaurus. Part of the reason is the gravitational pull of other galaxies in our supercluster. "But our whole supercluster is being pulled in the direction of this other supercluster, Shapley, though it remains to be seen if that's all that's going on," said Tully.

Superclusters are the largest cosmic structures known to exist in the universe. Writing in an accompanying article, Elmo Tempel, an astronomer at the Tartu Observatory in Estonia, praised the name given to Earth's supercluster. "It is taken from the Hawaiian words lani, which means heaven, and akea, which means spacious or immeasurable. That is just the name one would expect for the whopping system that we live in."
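The core step of that analysis, separating a galaxy's gravitationally driven motion from the uniform cosmic expansion, can be sketched in a few lines. This is an illustrative toy, not the team's code; the Hubble constant value and the example numbers are assumptions:

```python
# Toy sketch: subtract the Hubble-flow recession velocity from a galaxy's
# observed radial velocity to recover its "peculiar" velocity, the leftover
# motion caused by the gravitational pull of nearby structures.

H0 = 70.0  # Hubble constant in km/s per Mpc (assumed round value)

def peculiar_velocity(v_observed_kms: float, distance_mpc: float) -> float:
    """Observed radial velocity minus the expansion velocity at that distance."""
    return v_observed_kms - H0 * distance_mpc

# A galaxy 100 Mpc away receding at 7,300 km/s: 7,000 km/s of that is
# expansion, leaving 300 km/s of genuine motion along our line of sight.
print(peculiar_velocity(7300.0, 100.0))  # 300.0
```

Mapping the directions of thousands of such peculiar velocities is what let the team draw a boundary where the flows diverge, one supercluster from the next.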


Will superintelligent AIs be our doom?

Nick Bostrom says artificial intelligence poses an existential threat to humanity.

Every morning Nick Bostrom wakes up, brushes his teeth, and gets to work thinking about how the human species may be wiped off the face of the earth. Bostrom, director of the Future of Humanity Institute at the University of Oxford, is an expert on existential threats to humanity. Of all the perils that make his list, though, he's most concerned with the threat posed by artificial intelligence.

Bostrom's new book, Superintelligence: Paths, Dangers, Strategies (Oxford University Press), maps out scenarios in which humans create a "seed AI" that is smart enough to improve its own intelligence and skills and which goes on to take over the world. Bostrom discusses what might motivate such a machine and explains why its goals might be incompatible with the continued existence of human beings. (In one example, a factory AI is given the task of maximizing the production of paper clips. Once it becomes superintelligent, it proceeds to convert all available resources, including human bodies, into paper clips.) Bostrom's book also runs through potential control strategies for an AI, and the reasons they might not work.

In the passage from the book below, Bostrom imagines a scenario in which AI researchers, trying to proceed cautiously, test their creations in a controlled and limited "sandbox" environment.


The Future of Robot Labor Is the Future of Capitalism | Motherboard

You've seen the headlines by now: The robots are coming, and they're going to take our jobs. The future really doesn't look so great for the average, human working stiff, since 47 percent of the world's jobs are set to be automated in the next two decades, according to a recent and much-publicised University of Oxford study.

Some see these developments in apocalyptic terms, with robot workers creating a new underclass of jobless humans, while others see it in a more hopeful light, claiming robots may instead lead us to a future where work isn't necessary. But fretting over which jobs will be lost and which will be preserved doesn't do much good. The thing is, robots entering the workplace isn't even really about robots.

The coming age of robot workers chiefly reflects a tension that's been around since the first common lands were enclosed by landowners who declared them private property: that between labour and the owners of capital. The future of labour in the robot age has everything to do with capitalism.

Image: Mixabest/Wikimedia

The best way to understand how this all works and where it will go is to refer to the writings of the person who understood capitalism best: Karl Marx. In particular, to a little-known journal fragment published in his manuscript The Grundrisse called "The Fragment on Machines."

Whether you love him, hate him, or just avoid him completely, Marx dedicated his life to understanding how capitalism works. He was obsessed with it. In "The Fragment," Marx grappled with what a fully automated capitalist society might mean for the worker in the future.

According to Marx, automation that displaces workers in favour of machines that can produce more goods in less time is part and parcel of how capitalism operates. By developing fixed capital (machines), bosses can do away with much of the variable capital (workers) that saps their bottom line with pesky things like wages and short work days.
He writes:

"The increase of the productive force of labour and the greatest possible negation of necessary labour is the necessary tendency of capital, as we have seen. The transformation of the means of labour into machinery is the realization of this tendency."

Seen through this lens, robot workers are the rational end point of automation as it develops in a capitalist economy. The question of what happens to workers displaced by automation is an especially interesting line of inquiry because it points to a serious contradiction in capitalism, according to Marx:

"Capital itself is the moving contradiction, [in] that it presses to reduce labour time to a minimum, while it posits labour time, on the other side, as sole measure and source of wealth."

In Marxist theory, capitalists create profit by extracting what's called surplus value from workers: paying them less than what their time is worth and gaining the difference as profit after the commodity has been sold at market price, arrived at by metrics abstracted from the act of labour itself. So what happens when humans aren't the ones working anymore? Curiously, Marx finds himself among the contemporary robotic utopianists in this regard.

Once robots take over society's productive forces, people will have more free time than ever before, which will "redound to the benefit of emancipated labour, and is the condition of its emancipation," Marx wrote. Humans, once freed from the bonds of soul-crushing capitalist labour, will develop new means of social thought and cooperation outside of the wage relation that frames most of our interactions under capitalism. In short, Marx claimed that automation would bring about the end of capitalism.

It's a familiar sentiment that has gained new traction in recent years thanks to robots being in vogue, but we only have to look to the recent past to know that things didn't exactly work out that way.
Capitalism is very much alive and well, despite automation's steady march towards ascendancy over the centuries. The reason is this: automation doesn't disrupt capitalism. It's an integral part of the system. What we understand as "work" has morphed to accommodate its advancement. There is no reason to assume that this will change just because automation is ramping up to sci-fi speed.

To paraphrase John Tomlinson in his analysis of technology, speed, and capitalism in The Culture of Speed: The Coming of Immediacy, no idiom captures the spirit of capitalism better than "time is money". If machines ostensibly create more free time for humans by doing more work, capitalists must create new forms of work to make that time productive in order to continue capturing surplus value for themselves. As Marx wrote (forgive my reprinting of his problematic language):

"The most developed machinery thus forces the worker to work longer than the savage does, or than he himself did with the simplest, crudest tools [...] But the possessors of [the] surplus produce or capital... employ people upon something not directly and immediately productive, e.g. in the erection of machinery. So it goes on."

"Not immediately productive" is the key phrase here. Just think of all the forms of work that have popped up since automation began to really take hold during the Industrial Revolution: service sector work, online work, part-time and otherwise low-paid work. You're not producing anything while working haphazard hours as a cashier at Walmart, but you are creating value by selling what has already been built, often by machines.

In the automated world, precarious labour reigns. Jobs that offer no stability, no satisfaction, no acceptable standard of living, and seem to take up all of our time by occupying so many scattered parcels of it are the norm.
Franco "Bifo" Berardi, a philosopher of labour and technology, explained it thusly in his book Precarious Rhapsody, referring to the legions of overworked part-time or no-timers as the "precariat":

"The word 'precariat' generally stands for the area of work that is no longer definable by fixed rules relative to the labor relation, to salary and to the length of the working day [...] Capital no longer recruits people, but buys packets of time, separated from their interchangeable and occasional bearers [...] The time of work is fractalized, that is, reduced to minimal fragments that can be reassembled, and the fractalization makes it possible for capital to constantly find the conditions of minimum salary."

Online labour is especially applicable to this description of the new definition of work. For example, work that increasingly depends on emails, instant correspondence across time zones, and devices that otherwise bring work home from the office in any number of ways creates a mental environment where time is no longer marked into firm blocks.

Indeed, the "work day" is all day, every day, and time is now a far more fluid concept than before. Amazon's Mechanical Turk platform, on which low-income workers sell their time performing menial creative tasks for pennies per hour, is a particularly dystopic example of this.

A radically different form of work is that of providing personal data for profit. This online data work is particularly insidious for two main reasons. First, because it is often not recognized as work at all. You might not think that messaging a pal about your new pair of headphones is work, but labour theorists like Maurizio Lazzarato disagree. Second, because workers are completely cut out of the data profit loop, although that may be changing.

Image: ProducerMatthew/Wikimedia

These points, taken together, paint a pretty dismal picture of the future of humans living with robotic labour under capitalism.
It's likely that we'll be working more, and at shitty jobs. The question is: what kind of work, and exactly how shitty?

In my opinion, being anti-robot or anti-technology is not a very helpful position to take. There's no inherent reason that automation could not be harnessed to provide more social good than harm. No, a technologically-motivated movement is not what's needed. Instead, a political one that aims to divest technological advancement from the motives of capitalism is in order.

Some people are already working toward this. The basic income movement, which calls for a minimum salary to be paid out to every living human regardless of employment status, is a good start, because it implies a significant departure from the purely economic language of austerity in political thought and argues for a basic income for the salient reason that we're human and we deserve to live. However, if we really want to change the way things are headed, more will be needed.

At a time when so many of us are looking towards the future, one particular possibility is continually ignored: a future without capitalism. Work without capitalism, free time without capitalism, and, yes, even robots without capitalism. Perhaps only then could we build the foundations of a future world where technology works for all of us, and not just the privileged few.


Sailing on Solar Winds

A 64.5-foot-wide test version of the Sunjammer solar sail sits unfurled in a large vacuum chamber during tests at NASA’s Plum Brook facility in Ohio. Slated for launch in 2017, the full-size 124-foot-wide version will be the largest orbiting solar sail ever, employing 13,000 square feet of thin Kapton film to harness the weak but constant force of solar photons. Movable vanes at the sail’s four corners act like rudders so operators can help it achieve and maintain a gravitationally stable solar orbit. Despite its size, the sail weighs just 70 pounds and will carry scientific instruments to observe space weather.
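The figures in the paragraph above are enough for a back-of-the-envelope estimate of the photon thrust such a sail would feel. The sketch below is my own rough calculation, assuming a perfectly reflecting sail face-on to the Sun at 1 AU and a standard solar irradiance value; none of these numbers are mission specifications:

```python
# Rough radiation-pressure estimate for the full-size Sunjammer sail.
# Assumptions: perfect reflector, sail face-on to the Sun, located at 1 AU.

SOLAR_IRRADIANCE = 1361.0          # W/m^2 at 1 AU (assumed standard value)
C = 2.998e8                        # speed of light, m/s
SAIL_AREA_M2 = 13_000 * 0.092903   # 13,000 sq ft from the article, in m^2
SAIL_MASS_KG = 70 * 0.45359        # 70 lb from the article, in kg

# Radiation pressure on a perfectly reflecting surface: P = 2 * I / c
pressure_pa = 2 * SOLAR_IRRADIANCE / C
thrust_n = pressure_pa * SAIL_AREA_M2
accel_ms2 = thrust_n / SAIL_MASS_KG

print(f"thrust ~ {thrust_n * 1000:.1f} mN")             # about 11 mN
print(f"acceleration ~ {accel_ms2 * 1000:.2f} mm/s^2")  # about 0.35 mm/s^2
```

The push is tiny, but unlike a chemical engine it never runs out of propellant, so the weak-but-constant force the article describes integrates into a substantial velocity change over months of sailing.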


A Physicist's View of the Afterlife: Weird Quantum Physics

Dr. Alan Hugenot discusses the science of the afterlife at the IANDS 2014 Conference in Newport Beach, Calif., on Aug. 29, 2014. (Tara MacIsaac/Epoch Times)

The universe is full of mysteries that challenge our current knowledge. In "Beyond Science" Epoch Times collects stories about these strange phenomena to stimulate the imagination and open up previously undreamed of possibilities. Are they true? You decide.

NEWPORT BEACH, Calif.

Dr. Alan Ross Hugenot has spent decades contemplating the conundrums of physics, along with the enigma of human consciousness.

Hugenot holds a doctorate of science in mechanical engineering, and has had a successful career in marine engineering, serving on committees that write the ship-building standards for the United States. He studied physics and mechanical engineering at the Oregon Institute of Technology. "I did things using Newtonian physics to create ships," he said, "but the whole time, I knew better. There's this whole other world that our five senses don't register." He gave a talk on the science of the afterlife at the International Association for Near-Death Studies (IANDS) 2014 Conference in Newport Beach, Calif., on Aug. 29.

Exploring the scientific theories related to this other world, Hugenot has wondered whether the consciousness of living human beings as well as the "souls" of the dead reside in dark matter or dark energy. He has pondered the implications of the power our consciousness seems to have over physical reality.

Hugenot told of a near-death experience in the 1970s during which he experienced part of this other world.
He found it "more real than this place." These matters aren't only intellectual curiosities for Hugenot; they bear on a profound experience that has changed his worldview.

Hugenot summarized some theories in physics, interpreting how they may point to the existence of a consciousness independent of the brain and to the existence of an afterlife on another plane. He noted that further investigation (reliant on further funding) would be needed to verify his postulates. He also noted challenges in trying to verify these ideas in a traditional scientific framework.

How Your Consciousness Could Exist in a 'Cloud'

Hugenot said the human consciousness may function like the data we store in the cloud. That data can be accessed from multiple devices: your smartphone, your tablet, your desktop computer. During a near-death experience, theorized Hugenot, the mind may be fleeing a dangerous situation. We can "flip the switch and go to the other computer," he said.

"The nexus of my consciousness is in my head, but the locus of my consciousness—where is it really? It's outside my body. Because inside and outside is an illusion."

Space may not exist, or at least not in the way we commonly understand it, he said, citing Dr. John Bell's non-locality theorem. "[It's a] hard one to get; we love our space," he joked.

Non-locality refers to the ability of two objects to instantaneously know about each other's states, even if they're separated by vast distances. It is related to the phenomenon of entanglement: particle A and particle B interact, and thereafter remain mysteriously bonded. When particle A undergoes a change, particle B undergoes the same change; A and B have, in many ways, lost their individuality and behave as a single entity.

Bell's theorem has been verified by many scientists over the years and is part of mainstream quantum physics.
Hugenot's ideas about the consciousness existing inside and outside of the human body at the same time build on this theorem, but remain outside the mainstream.

Is the Afterlife in Dark Matter, or Maybe in Another Dimension?

What scientists have observed accounts for an estimated 4 percent of our universe. Dark energy and dark matter comprise the other 96 percent. Scientists don't really know what dark energy and matter are, and their existence is only perceived because of the effects they appear to have on observable matter. Hugenot said: "This undiscerned 96 percent of the universe … gives us plenty of room for both consciousness and the afterlife to exist in."

Perhaps the consciousness exists in another dimension, Hugenot said. String Theory, much-discussed in mainstream physics, holds that other dimensions exist beyond the four-dimensional concept of the universe. String Theory views the universe as a world of very thin, vibrating strings. The strings are thought to project from a lower-dimensional cosmos, one that is simpler, flatter, and without gravity.

Why Ghosts Can Go Through Walls—and You Can Too

Hugenot said that reaching another dimension could be a matter of belief. Maybe our bodies could pass through walls if we really believed they could. "My whole soul believes in 3-D, so I can't go through the wall," he said. He looked at some experiments that have shown the power human consciousness has to influence physical reality.

Light Can Be Either a Particle or a Wave—Depending on Your Thoughts

Consciousness seems to have a physical impact on matter.
The famed double-slit experiment shocked physicists when it showed that photons (light particles) act differently when they are observed than when no one is watching. Essentially, the observer can cause the photons to take either the particle or the wave form by the very act of measuring; they aren't fixed in one form as expected.

Particles exist as potential, Hugenot said, and the observer determines what form they take. He noted that the influence of a researcher's mind on his or her experiment has serious implications: "If a skeptic wants to replicate what a 'believer' found in their experiment, the skeptic can't do it, because … [it's going to go] the way that guy wants to see it and not the way the other guy wants to see it."

Hugenot asked, if potential only takes form when observed, who or what was the observer of the Big Bang? His answer is, simply, "consciousness."

Princeton Experiments Show the Mind Can Influence Electronic Devices

Princeton Engineering Anomalies Research Lab (PEAR) at Princeton University is famous for experiments it conducted showing our minds may actually affect the operations of electronic devices. Over many years, PEAR researchers conducted millions of experiments with hundreds of people. A typical example of such an experiment is as follows:

A random event generator (REG) is an electronic device that can produce bits representing either 0 or 1. Study participants would try to influence the REG either way, toward 0 or toward 1. If the events showed a significant favor in the direction of the person's will above what chance would dictate, it suggested the person's will influenced the machine.

The cumulative finding was that the human mind can slightly influence the machine. Though the influence was slight, the consistency was significant. Over the course of so many trials, the statistical power increased.
The probability of these results happening by chance rather than by an influence of the human mind is less than 1 in 1 billion.
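The statistical claim here, that a minuscule per-bit bias becomes overwhelming once enough bits are accumulated, is easy to illustrate with a standard binomial z-score. The numbers below are invented for illustration; they are not PEAR's actual data:

```python
import math

def z_score(ones: int, n: int) -> float:
    """Normal-approximation z-score for observing `ones` 1-bits
    out of n bits from a fair (p = 0.5) random source."""
    mean = n * 0.5
    sd = math.sqrt(n * 0.25)  # binomial standard deviation, sqrt(n*p*(1-p))
    return (ones - mean) / sd

# A bias of 5 extra 1-bits per 10,000 is statistically invisible in a short run:
print(z_score(5_005, 10_000))               # 0.1 sigma
# The same bias rate sustained over a billion bits is enormous:
print(z_score(500_500_000, 1_000_000_000))  # ~31.6 sigma
```

This is the sense in which "the statistical power increased" over many trials: the effect size stays tiny, but the standard error shrinks with the square root of the sample size, so a consistent deviation eventually dwarfs chance.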


The New Urban Cemetery

The future of burial might take us from cradle to compost.

A walk through a cemetery, amongst the aging steady headstones, under leafy trees, might seem like a walk back in time. Death is one of the places where tradition and superstition are incredibly strong. Change is met with skepticism. Cemeteries are forever, right?

Not exactly. The graveyard as Americans know it is of relatively recent vintage. The idea that each of us would get our own little slice of a vast green lawn is one that didn't fully take hold in the United States until the early 1800s. And these places where the dead go are going to continue to evolve. The future of graveyards is coming, and here's what it might look like.

At Arnos Vale Cemetery in the UK, visitors can get everything from a nice lunch to a yoga lesson. The cemetery has long been committed to bringing people in, hosting everything from dog walking excursions to weddings. And last year they launched something called the Future Cemetery, an experiment in turning a graveyard into an interactive experience. "We all know death is in the future, we just want to make the future more visible," says John Troyer, a professor at the University of Bath and a founder of the project.

The idea of the Future Cemetery is to create a place for people to connect with death. What that actually means and looks like is still in development, Troyer says, but in the first stage of the project they did everything from projections to audio installations. Now, they're working on developing augmented reality experiences in cemeteries: elements that are only visible with certain devices and if you know they're there. The idea is to allow people to add to their own cemetery experience without infringing on others. That could mean things like projections of old photos of a person on their tombstone, or the voices of the dead reading passages.
It could mean an augmented map layer that gives visitors more information about the lives of the people buried there, or a live performance of someone's favorite play. "There will always be a cemetery-like space," Troyer says. "What's going on right now is a rethinking of what the cemetery could be."

For some, rethinking cemeteries involves more than augmenting the traditional structure of a sprawling field of headstones. The Urban Death Project takes a decidedly different tack. Rather than taking a loved one to a cemetery or crematorium, architect Katrina Spade has designed a space where bodies are composted into reusable earth. This is essentially what happens to bodies in cemeteries eventually, she points out. The Urban Death Project simply consolidates and celebrates that process. "I love the idea that we could have a positive impact on the environment, from soil regeneration to climate change," Spade says, "and I really like that idea that we could be productive one time after we die."

Spade's project is motivated by a number of things. The first is religion, or more precisely the lack of it. "I was thinking about my own mortality, and I was thinking about my family, and we're all non-religious," she says. "If you have a loved one that dies, many people have a religious figure to turn to, and if you don't have that to turn to, what kind of guidance is out there?" This may resonate with others: According to a Pew survey, 16.1% of adults are unaffiliated with a religion. That is more than double the number who say they were unaffiliated with a religion as kids; those who grew up with religion are abandoning it in droves.

The second is the environment, and the harsh toll that the funeral industry can take on it.
The UDP website lists some statistics about burials: we bury 30 million board feet of hardwood and 90,000 tons of steel each year, and use 750,000 gallons of embalming fluid. "I asked myself if I could find a new system for the disposal of our dead that honored decomposition, and created the ritual for those who are non-religious, and something that would reconnect with nature in an urban setting. Wouldn't you like to go on your lunch break and see the plants that are growing from the actual people who lived in this city before you?"

Spade is quick to point out that she's not asking anybody to skip burial if burial is meaningful to them. "If traditional burial is appealing to someone—great. It's environmentally fraught but so are a lot of things we do," she told me. Rather, she's looking to give people another option. And she recognizes that the idea of a loved one being composted alongside other bodies is hard for some to get behind. But she's also already had interest. "We've had some people offer up their bodies," she says, "and the timing might not work out but it's been really inspiring and emotional."

A cornerstone of many of these urban cemetery designs is the issue of space: as cities get bigger and space gets more constrained, there simply won't be room for the same kinds of cemeteries we've been using. The Urban Death Project can handle hundreds of bodies without increasing in size. Others have proposed vertical cemeteries; rather than having bodies lie on their backs, these would have them slotted into the ground feet first.

But Caitlin Doughty, a mortician and death theorist, isn't convinced that we're out of burial space, especially not in the United States. "There is room in cities for cemeteries. Just build one less Chipotle, or one less Target," she told me.
"If you've ever flown across the United States, you know we're not out of space."

For Doughty, bringing cemeteries into the cities rather than pushing them out into rural areas is a question of updating our relationship with death, not of efficiency or space constraints. "It's important to have the bodies inside the city as reminders of mortality," she says, echoing Troyer's earlier goal.

Troyer points out that while we may see a cemetery as a decidedly non-technological place, it's actually full of innovation. "Cemeteries are layer after layer of human invention, and because it's non-digital it goes past us as being technological." From the sign systems we use to the ways we put bodies in the ground to the gravestones, each piece is a kind of technology. "Burial has been one of the most significant and pervasive human inventions ever." In that sense, Troyer says, what might seem like an unlikely match between technology and cemeteries is actually a natural one.

The challenge in updating a cemetery, especially updating it in a technological sense, is that we're talking about three different speeds. Cemeteries operate on the scale of hundreds of years, if not more. In theory they're meant to keep, or at least memorialize, bodies essentially forever. Then there is the speed of a human life: the people whose bodies will have to be dealt with, and whose minds will have to be changed about burial. And the third is digital technology, which has been accelerating for decades. The future of cemeteries lies in jumping between those three speeds, finding something that is both lasting and meaningful.

"If you're just there to show off some gizmo, that's not going to have a long term applicability," says Troyer. "You need something that really connects with this idea of death. So you have to start with death and let that inform the technology."

When they started, Troyer said one of the first things they agreed upon was that they weren't making an app.
"Everybody said: Oh, are you going to have an app? Why would we need an app? There's a cautionary tale here—don't fall into a language of innovation that isn't necessary," he says. "It isn't necessary to disrupt the cemetery."

Doughty says that when thinking about changing rituals surrounding death there is really only one golden rule, regardless of the technologies. "What's essential, if I had to pick one thing, is that those who remain feel good about it," she says.


Near Earth Object Program - NASA

Near-Earth Objects (NEOs) are comets and asteroids that have been nudged by the gravitational attraction of nearby planets into orbits that allow them to enter the Earth's neighborhood. Composed mostly of water ice with embedded dust particles, comets originally formed in the cold outer planetary system, while most of the rocky asteroids formed in the warmer inner solar system between the orbits of Mars and Jupiter.

The scientific interest in comets and asteroids is due largely to their status as the relatively unchanged remnant debris from the solar system formation process some 4.6 billion years ago. The giant outer planets (Jupiter, Saturn, Uranus, and Neptune) formed from an agglomeration of billions of comets, and the leftover bits and pieces from this formation process are the comets we see today. Likewise, today's asteroids are the bits and pieces left over from the initial agglomeration of the inner planets that include Mercury, Venus, Earth, and Mars. As the primitive, leftover building blocks of the solar system formation process, comets and asteroids offer clues to the chemical mixture from which the planets formed some 4.6 billion years ago. If we wish to know the composition of the primordial mixture from which the planets formed, then we must determine the chemical constituents of the leftover debris from this formation process: the comets and asteroids.

NEO Groups

In terms of orbital elements, NEOs are asteroids and comets with perihelion distance q less than 1.3 AU. Near-Earth Comets (NECs) are further restricted to include only short-period comets (i.e. orbital period P less than 200 years). The vast majority of NEOs are asteroids, referred to as Near-Earth Asteroids (NEAs). NEAs are divided into groups (Aten, Apollo, Amor) according to their perihelion distance (q), aphelion distance (Q) and their semi-major axes (a).
NECs (Near-Earth Comets): q < 1.3 AU, P < 200 years
NEAs (Near-Earth Asteroids): q < 1.3 AU
Atens (Earth-crossing NEAs with semi-major axes smaller than Earth's, named after asteroid 2062 Aten): a < 1.0 AU, Q > 0.983 AU
Apollos (Earth-crossing NEAs with semi-major axes larger than Earth's, named after asteroid 1862 Apollo): a > 1.0 AU, q < 1.017 AU
Amors (Earth-approaching NEAs with orbits exterior to Earth's but interior to Mars', named after asteroid 1221 Amor): a > 1.0 AU, 1.017 AU < q < 1.3 AU
PHAs (Potentially Hazardous Asteroids, NEAs whose Minimum Orbit Intersection Distance with the Earth is 0.05 AU or less and whose absolute magnitude is 22.0 or brighter): MOID ≤ 0.05 AU, H ≤ 22.0

What is a PHA?

Potentially Hazardous Asteroids (PHAs) are currently defined based on parameters that measure the asteroid's potential to make threatening close approaches to the Earth. Specifically, all asteroids with an Earth Minimum Orbit Intersection Distance (MOID) of 0.05 AU or less and an absolute magnitude (H) of 22.0 or less are considered PHAs. In other words, asteroids that can't get any closer to the Earth (i.e. MOID) than 0.05 AU (roughly 7,480,000 km or 4,650,000 mi), or that are smaller than about 150 m (500 ft) in diameter (i.e. H = 22.0 with assumed albedo of 13%), are not considered PHAs. There are currently 586 known PHAs.

This "potential" to make close Earth approaches does not mean a PHA will impact the Earth. It only means there is a possibility for such a threat. By monitoring these PHAs and updating their orbits as new observations become available, we can better predict the close-approach statistics and thus their Earth-impact threat. For more information about this topic please visit: http://neo.jpl.nasa.gov/neo/

Excerpt from the Near Earth Object Program
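Since the NEA groups are defined by simple inequalities on the orbital elements, classification is mechanical. Below is a small illustrative helper (my own sketch, not NASA code; the example orbital elements for the namesake asteroids are approximate published values):

```python
def classify_nea(a: float, q: float, Q: float) -> str:
    """Classify a near-Earth asteroid by semi-major axis a, perihelion q,
    and aphelion Q (all in AU), using the Aten/Apollo/Amor definitions."""
    if q >= 1.3:
        return "not an NEA"  # NEAs require perihelion q < 1.3 AU
    if a < 1.0 and Q > 0.983:
        return "Aten"        # Earth-crossing, orbit mostly inside Earth's
    if a > 1.0 and q < 1.017:
        return "Apollo"      # Earth-crossing, orbit mostly outside Earth's
    if a > 1.0 and 1.017 < q < 1.3:
        return "Amor"        # Earth-approaching but never crossing
    return "other"

# Approximate elements of each group's namesake asteroid:
print(classify_nea(0.967, 0.790, 1.143))  # 2062 Aten  -> Aten
print(classify_nea(1.471, 0.647, 2.294))  # 1862 Apollo -> Apollo
print(classify_nea(1.919, 1.083, 2.755))  # 1221 Amor  -> Amor
```

Note that the PHA designation is separate: it depends on MOID and absolute magnitude H, which are not orbital elements in the same sense, so it is not covered by this helper.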


NASA developing submarine to research Titan’s oceans

An artist's impression of hydrocarbon pools and icy, rocky terrain on the surface of Saturn's largest moon, Titan. Image credit: Steven Hobbs (Brisbane, Queensland, Australia).

Researchers are currently working with $100,000 in government funding to design a submarine that might someday be able to explore the depths of the seas on Saturn's moon, Titan.

NASA announced in early June that it had selected the proposed space submarine as one of 12 concepts that researchers will have nine months to a year to develop with the help of a hefty grant; at the end of that trial period, the agency may elect to hand over another $500,000 to any projects from the first round of prototyping deemed worthy of participating in a two-year Phase II stage.

Steven Oleson of NASA's Glenn Research Center in Cleveland, Ohio wrote in his initial proposal for the Titan submarine that the moon provides scientists with a unique template for space tests, but has until now hardly been explored.

"Titan is unique in the outer solar system in that it is the only one of the bodies outside the Earth with liquid lakes and seas on its surface. The Titanian seas, however, are not composed of water, like Earth's seas, but are seas of liquid hydrocarbons," Oleson wrote. "What lies beneath the surface of Titan's seas?
We propose to develop a conceptual design of a submersible autonomous vehicle (submarine) to explore extraterrestrial seas."

Titan Submarine: Exploring the Depths of Kraken (Image from NASA.org)

Specifically, Oleson's proposal calls for sending a submarine to Titan's largest northern sea, Kraken Mare, to carry out tests that would provide "unprecedented knowledge of an extraterrestrial sea" while "expanding NASA's existing capabilities in planetary exploration to include in situ nautical operations," according to his statement.

Although NASA approved other proposals this year along the lines of a "swarm flyby gravimetry" and a "Mars ecopoiesis test bed," the ambitious Titan submarine project made headlines right away after the agency announced three months ago that it had been accepted into NASA's Innovative Advanced Concepts (NIAC) program.

"It's a very far-out idea, but it's something that I think we can definitely do engineering-wise," Oleson told NBC News back in June. "The focus for us is trying to get a vehicle that will operate in a hydrocarbon sea," Oleson said. "Think of it as liquid natural gas. How would you get a vehicle to operate in there?"

According to a Washington Post report from earlier this year, though, the spacecraft might be able to accomplish as much by relying on that very hydrocarbon "sea water" as a fuel source; "since the methane won't interfere with radio signals, this might make it possible to connect with an orbiting satellite in near real time."

Should that much be accomplished, Oleson added in his proposal, NASA may soon have new limits to test. "By addressing the challenges of autonomous submersible exploration in a cold outer solar system environment, Titan Sub serves as a pathfinder for even more exotic future exploration of the subsurface water oceans of Europa," he wrote.


Scientists say they can slow aging by activating a gene with a 'remote'

Biologists conducted a study on fruit flies that found activating a gene called AMPK increased the flies' lifespans by 30 percent. They hope the treatment could be used to increase human lifespan.

UCLA biologists say they can slow the aging process by activating a gene by 'remote control.' Scientists at the university's Molecular Biology Institute published a paper this month reporting that activating a gene called AMPK in fruit fly cells increased the insects' lifespans by about 30 percent and left them living healthier lives. The treatment could one day be used to increase human lifespan and stave off disease by stimulating the gene in targeted organs or organ systems.

The study found that when AMPK was increased in cells in the nervous system, not only did those cells age more slowly but the cells in the intestines did as well. The reverse was true when AMPK was targeted in intestinal cells. After treatment, the fruit flies in the study saw their lifespans increase from six weeks to eight weeks.

By tweaking levels of AMPK, scientists were allowing cells to dump 'cellular garbage': components that cells discard due to damage or age. Discarding this material earlier avoids damage to the cell and slows the aging process.

David Walker, associate professor at UCLA and senior author of the research, is hopeful that the treatment could one day be used to prevent the debilitating diseases associated with aging. "Instead of studying the diseases of aging—Parkinson's disease, Alzheimer's disease, cancer, stroke, cardiovascular disease, diabetes—one by one, we believe it may be possible to intervene in the aging process and delay the onset of many of these diseases," said Walker.

UCLA Newsroom writes that fruit flies are used in gene
studies because scientists have already cataloged the entirety of the fruit fly's genes and can activate targeted genes. The lead author of the study said that a drug used to treat type 2 diabetes could be used to activate AMPK in humans.


AB blood group people likelier to face dementia when older

A new study has found that people belonging to blood group AB face a greater risk of developing memory loss in their later years than people with other blood types. The study found that people with AB blood, the least common blood type, were 82% more likely to develop the thinking and memory problems that can lead to dementia. Previous studies have shown that people with type O blood have a lower risk of heart disease and stroke, factors that can increase the risk of memory loss and dementia.

The study followed more than 30,000 people for an average of 3.4 years. Among those who had no memory or thinking problems at the beginning, the study identified 495 participants who developed thinking and memory problems, or cognitive impairment, during the study. They were compared to 587 people with no cognitive problems. People with AB blood type made up 6% of the group who developed cognitive impairment, higher than the 4% found in the US population.

Study author Mary Cushman of the University of Vermont College of Medicine in Burlington said that blood type was also related to other vascular conditions such as stroke, so the findings highlight the connections between vascular issues and brain health. However, more research is needed to confirm these results.

Researchers also looked at blood levels of factor VIII, a protein that helps blood to clot. High levels of factor VIII were related to a higher risk of cognitive impairment and dementia. People in this study with higher levels of factor VIII were 24% more likely to develop thinking and memory problems than people with lower levels of the protein. People with AB blood had a higher average level of factor VIII than people with other blood types.

The study is published in the online issue of Neurology, the medical journal of the American Academy of Neurology.


Activating single gene could extend human lifespan by 30% - scientists

In an experiment on fruit flies, UCLA biologists activated just one gene, AMPK, which extended their lifespan by nearly a third by helping them to get rid of "cellular garbage" that causes old-age diseases such as Parkinson's. Humans have the same gene.

"Instead of studying the diseases of aging — Parkinson's disease, Alzheimer's disease, cancer, stroke, cardiovascular disease, diabetes — one by one, we believe it may be possible to intervene in the aging process and delay the onset of many of these diseases," said author David Walker, an associate professor of integrative biology and physiology at UCLA, whose paper was published last week in the scientific journal Cell Reports. "We are not there yet, and it could, of course, take many years, but that is our goal and we think it is realistic."

UCLA's laboratory conducted the study on 100,000 fruit flies, used because they have been genetically mapped and scientists can easily mutate just one gene within a population, limiting variables and ensuring a tightly controlled experiment. Flies with the gene activated in their intestines lived just over eight weeks instead of the usual six and, almost as crucially, remained healthier for longer into their lifespans. Projected onto the current US life expectancy of 78, this would correspond to an average lifetime of 101 years.

The impressive results were achieved by activating a process called autophagy, which is stimulated by AMPK. Autophagy, which translates from Greek as 'eating oneself', allows cells to isolate and discard old, dysfunctional fragments, known as cellular garbage, which can damage healthy cells.
Many of the old-age diseases are widely thought to result from decreased rates of autophagy, which eventually lets millions of unhealthy cells build up in the body. While humans have the AMPK gene, in most people it is 'turned off'.

Researchers also found that switching on the gene in one part of the body results in its activation elsewhere. "A really interesting finding was when [lead author] Matthew Ulgherait activated AMPK in the nervous system, he saw evidence of increased levels of autophagy in not only the brain, but also in the intestine. And vice versa: activating AMPK in the intestine produced increased levels of autophagy in the brain — and perhaps elsewhere too," Walker said.

This means that in the future, doctors could perform treatments in easier-to-reach areas, such as the stomach, even though the main benefits of the therapy could be in harder-to-access ones, like the brain.

The wider conclusions drawn by the team are not just about the single gene AMPK, but demonstrate the key role of autophagy in longevity. "Matt moved beyond correlation and established causality," Walker said. "He showed that the activation of autophagy was both necessary to see the anti-aging effects and sufficient; that he could bypass AMPK and directly target autophagy."

Intriguingly, while the benefits of genetic AMPK treatment appear to be years away, there is already a drug on the market that stimulates existing AMPK genes, which are activated when cells reach a low energy level, as a sort of repair mechanism. Metformin was synthesized as long ago as 1922, has been widely used to fight diabetes since the late 1950s, and can now be bought cheaply as a generic. Despite considerable side effects, in recent years it has been touted in multiple studies as decreasing the incidence of cancer and heart disease, and is already used by some as an anti-aging drug, though it cannot be prescribed as such.
This appears to dovetail with UCLA's research on AMPK, a connection Walker acknowledged, though he stopped short of advising healthy people to take metformin.
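The 101-year figure above is simple proportional scaling; a minimal sketch of the arithmetic, using the 30 percent extension and 78-year baseline reported in the article:

```python
# Project the fly-lifespan gain onto human life expectancy,
# as the article does: a ~30% extension applied to a US
# life expectancy of 78 years.
fly_baseline_weeks = 6
fly_treated_weeks = 8    # "just over eight weeks"
extension = 0.30         # ~30 percent, as reported

human_baseline_years = 78
projected = human_baseline_years * (1 + extension)
print(round(projected))  # 101
```

Of course, nothing in the study shows the gain transfers linearly across species; the projection is only illustrative.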


Team unlocks new way to find Earth-like planets

The Venus Zone is the area around a star where a planet is likely to show similar conditions to Venus, researchers say.

As scientists continue to search the stars for habitable planets, they can thank a San Francisco astronomer for helping develop a new tool to use in that galactic quest. Stephen Kane, a professor at San Francisco State University, and two other scientists have reached a milestone in predicting which planets are most like Venus, which down the road could help determine the real difference between Earth and its sister planet.

Considered sister planets because of their similar size and appearance, and because they likely formed the same way, Venus and Earth actually have dramatically different surface conditions. The surface temperature of Venus is nearly 900 degrees Fahrenheit -- the hottest planetary surface in our solar system -- and its atmospheric pressure is about 90 times that of Earth.

"The atmosphere [of Venus] has gone a completely different path than Earth," said Kane, lead author of a study on Venus-like planets published Wednesday online in the Astrophysical Journal Letters. "What we want to do is understand, how did things go so wrong on Venus?"

On Wednesday, Kane revealed the first step in establishing what might have happened to Venus to place it on the "opposite end of the spectrum of habitability" from Earth. Kane's research, which has involved looking for Earth-size planets in our corner of the Milky Way Galaxy with NASA's Kepler telescope since 2009, led to the identification of the Venus Zone, the area around a star in which a planet is likely to exhibit the uninhabitable conditions found on Venus.

Previously, because Venus and Earth are so similar in size, there was no way to tell which of the two a distant Earth-size planet more closely resembled.
The Venus Zone allows scientists to determine the frequency of such unlivable planets in other solar systems. "Our estimate for stars like the sun is that about half of those have a planet like Venus," Kane said. Specifically, of stars like the sun, about 45 percent -- or 43 of the more than 4,000 planetary candidates discovered by the Kepler mission -- have planets similar to Venus.

The Venus Zone is calculated by determining where a planet like Venus or Earth would start to lose its atmosphere because it is so close to its star. With this new method, scientists can look at the atmospheric conditions of those planets and determine whether they have runaway greenhouse effects like Venus, where liquid water has long since evaporated due to the high temperatures. "That means ... we can start to really get a feel for what conditions a planet needs in order to become like Venus," Kane said.

Last spring, Kane's research helped lead to the discovery of the planet Kepler-186f, the closest scientists have come to finding a potentially habitable planet other than Earth. Kepler-186f, the fifth and outermost planet found orbiting the dwarf star Kepler-186, is both similar to Earth in size and within the habitable zone of its star -- criteria that had never before been known to overlap in a planet outside our solar system.

However, the transit method -- which scientists used to identify Kepler-186f by detecting potential planets as their orbits cross in front of their star, causing a tiny but periodic dimming of the star's brightness -- doesn't reveal a planet's atmosphere. The Venus Zone marks the first step in helping scientists discern what conditions such a planet might offer: an atmosphere containing oxygen, like Earth's, or one dominated by carbon dioxide, like Venus's.

Kane, who moved to San Francisco in 2013 to teach astronomy at SFSU, has been studying exoplanets, or extra-solar planets, since the field developed in 1995.
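The transit method mentioned above measures the fractional dimming as a planet crosses its star, which is roughly the ratio of the two disk areas. A minimal sketch of that relation (the radii are standard reference values, not figures from the article):

```python
# Approximate transit depth: the fraction of starlight blocked when
# a planet crosses the disk of its star, depth ~ (Rp / Rs)**2.
R_EARTH_KM = 6371       # mean Earth radius
R_SUN_KM = 695_700      # nominal solar radius

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fractional dimming for a planet transiting its star."""
    return (r_planet_km / r_star_km) ** 2

depth = transit_depth(R_EARTH_KM, R_SUN_KM)
print(f"{depth * 1e6:.0f} ppm")  # 84 ppm for an Earth-Sun analog
```

The tiny depth (tens of parts per million for an Earth analog) is why the method says nothing about atmospheric composition: it measures size, not spectra.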


Atomic force microscopy image of DNA origami made using both the new technique (the large shapes) and the previous technique (the small ones)

Atomic force microscopy image of DNA origami made using both the new technique (the large shapes) and the previous technique (the small ones). Image: Alexandria Marchi

Researchers from North Carolina State Univ., Duke Univ. and the Univ. of Copenhagen have created the world's largest DNA origami: nanoscale constructions with applications ranging from biomedical research to nanoelectronics. "These origami can be customized for use in everything from studying cell behavior to creating templates for the nanofabrication of electronic components," says Dr. Thom LaBean, an associate professor of materials science and engineering at NC State and senior author of a paper describing the work.

DNA origami are self-assembling biochemical structures that are made up of two types of DNA. To make DNA origami, researchers begin with a biologically derived strand of DNA called the scaffold strand. The researchers then design customized synthetic strands of DNA, called staple strands. Each staple strand is made up of a specific sequence of bases (adenine, cytosine, thymine and guanine, the building blocks of DNA), which is designed to pair with specific subsequences on the scaffold strand.

The staple strands are introduced into a solution containing the scaffold strand, and the solution is then heated and cooled. During this process, each staple strand attaches to specific sections of the scaffold strand, pulling those sections together and folding the scaffold strand into a specific shape.

The standard for DNA origami has long been a scaffold strand made up of 7,249 bases, creating structures that measure roughly 70 nm by 90 nm, though the shapes may vary. The research team led by LaBean, however, has now created DNA origami consisting of 51,466 bases, measuring approximately 200 nm by 300 nm.

"We had to do two things to make this viable," says Dr. Alexandria Marchi, lead author of the paper and a postdoctoral researcher at Duke.
"First we had to develop a custom scaffold strand that contained 51 kilobases. We did that with the help of molecular biologist Stanley Brown at the Univ. of Copenhagen.

"Second, in order to make this economically feasible, we had to find a cost-effective way of synthesizing staple strands—because we went from needing 220 staple strands to needing more than 1,600," Marchi says.

The researchers did this by using what is essentially a converted inkjet printer to synthesize DNA directly onto a plastic chip. "The technique we used not only creates large DNA origami, but has a fairly uniform output," LaBean says. "More than 90% of the origami self-assemble properly."

The paper is published online in Nano Letters.

Source: North Carolina State Univ.
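The staple-design step described above rests on Watson-Crick pairing: a staple that binds a scaffold region is that region's reverse complement. A toy sketch of the idea (an illustration only, not the team's actual design software; real staples typically bridge several nonadjacent scaffold regions):

```python
# Toy staple-strand design: the staple binding a scaffold region is
# its reverse complement (A pairs with T, C pairs with G, and the
# two strands run antiparallel, hence the reversal).
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def staple_for(scaffold_region: str) -> str:
    """Return the staple sequence that binds the given scaffold region."""
    return "".join(COMPLEMENT[base] for base in reversed(scaffold_region))

print(staple_for("ATCGGA"))  # TCCGAT
```

A full design tool would also choose where each staple crosses over between scaffold regions to pull the fold into shape; that routing, not the base pairing, is the hard part.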


Bringing the dead back to life

A radical procedure that involves replacing a patient's blood with cold salt water could retrieve people from the brink of death, says David Robson.

"When you are at 10C, with no brain activity, no heartbeat, no blood – everyone would agree that you're dead," says Peter Rhee at the University of Arizona, Tucson. "But we can still bring you back."

Rhee isn't exaggerating. With Samuel Tisherman, at the University of Maryland, College Park, he has shown that it's possible to keep bodies in 'suspended animation' for hours at a time. The procedure, so far tested on animals, is about as radical as any medical procedure comes: it involves draining the body of its blood and cooling it more than 20C below normal body temperature.

Once the injury is fixed, blood is pumped once again through the veins, and the body is slowly warmed back up. "As the blood is pumped in, the body turns pink right away," says Rhee. At a certain temperature, the heart flickers into life of its own accord. "It's quite curious, at 30C the heart will beat once, as if out of nowhere, then again – then as it gets even warmer it picks up all by itself." Astonishingly, the animals in their experiments show very few ill effects once they've woken up. "They'd be groggy for a little bit but back to normal the day after," says Tisherman.

Tisherman created headlines around the world earlier this year when he announced that they were ready to begin human trials of the technique on gunshot victims in Pittsburgh, Pennsylvania. The first patients will have been so badly wounded that their hearts have stopped beating, meaning that this is their last hope. "Cheating death with 'suspended animation'" is how CNN put it; "Killing a patient to save his life" was the New York Times' take.

Hyped up

The news coverage has sometimes offended Tisherman's cautious sensibility. During our conversation, he comes across as a thoughtful, measured man who is careful not to oversell his research.
He is particularly wary of using the term 'suspended animation'. "My concern isn't that it's inaccurate – it's that when people think of the term, they think about space travellers being frozen and woken up on Jupiter, or Han Solo in Star Wars," he says. "That doesn't help, because it's important for the public to know it's not science fiction – it's based on experimental work and is being studied in a disciplined manner, before we use it to stop people dying."

Rhee, who came to global attention after treating congresswoman Gabrielle Giffords after a shooting in 2011, tends to be bolder: he says he wouldn't rule out longer-term suspended animation in the distant future. "What we're doing is beginning part of that experiment."

Tisherman's quest to bring people back from the brink of death began at medical school, where he studied under Peter Safar. It is an inspiring dynasty: in the 1960s Safar had pioneered cardiopulmonary resuscitation (CPR), the now familiar procedure of applying pressure to the chest cavity to try to massage the heart back to life.

Safar's work began to change our perceptions of death, blurring the point that is meant to mark the end of our lives. "We've all been brought up to think death is an absolute moment – when you die you can't come back," says Sam Parnia, at the State University of New York in Stony Brook. "It used to be correct, but now with the basic discovery of CPR we've come to understand that the cells inside your body don't become irreversibly 'dead' for hours after you've 'died'… Even after you've become a cadaver, you're still retrievable."

Blurred line

Tisherman now thinks of death as the (admittedly subjective) point at which doctors give up resuscitation as a lost cause – but even then, some people can still make a remarkable comeback.
Last December, a paper in the journal Resuscitation caused a stir by suggesting that 50% of surveyed emergency doctors have witnessed 'Lazarus phenomena', in which a patient's heart has begun beating again by itself after doctors had given up hope.

Kick-starting the heart is only one half of the doctor's battle, however; the lack of oxygen after a cardiac arrest can cause serious damage to the body's vital organs, particularly the brain. "Every minute that there's no oxygen to those organs, they start dying," says Tisherman.

His former mentor, Safar, came up with a solution to this problem too, with 'therapeutic hypothermia', a procedure that involves cooling the body, typically to around 33C, by placing ice packs around it, for instance. At lower temperatures, cells begin to work in slow motion, reducing their metabolism and the damage that could be caused by oxygen starvation.

Combined with machines that can take over circulation and pump oxygen into the blood stream while the heart is being revived, this has helped open the window between cardiac arrest and brain death. One hospital in Texas recently reported that a 40-year-old man had survived, with his mind intact, after three-and-a-half hours of CPR. His treatment involved a constant rotation of medical students, nurses and doctors taking it in turns to perform the chest compressions. "Anybody in the room who had two arms was asked to jump in," says one of the attending doctors, Scott Taylor Bassett.

Such cases are rare, however: Bassett points out that they were only motivated to continue because the patient regained consciousness during the CPR, despite the fact that his heart was still not functioning. "During the chest compressions he would speak to us, showing he was neurologically intact," says Bassett.
"I've never seen it before or since – it was the defining moment of the entire decision making."

Buying time

Such long-term resuscitation is currently impossible for people whose cardiac arrest is accompanied by injury from trauma, such as gunshot wounds or automobile accidents. At the moment, the surgeon's best option is to clamp the arteries leading to the lower body before opening the chest and massaging the heart, which pushes a little blood flow to the brain while surgeons try to stitch up the wounds. Unfortunately, the survival rate is less than one in 10.

It is for this reason that Tisherman wants to plunge the body to around 10-15C, potentially giving the doctors a window of two or more hours to operate. Although this level of deep hypothermia is sometimes applied during heart surgery, Tisherman's project is the first time it will have been used to revive someone who had already 'died' before entering the hospital.

Perhaps most astonishing of all, the team drain the blood from the body and replace it with chilled saline solution. Because the body's metabolism has stopped, the blood is not required to keep cells alive, and saline solution is the quickest way to cool the patient, explains Tisherman.

With Rhee and others, Tisherman has spent two decades building a substantial portfolio of evidence to prove that the procedure is safe and effective. Many of the experiments involved pigs inflicted with near-fatal injuries. Mid-operation, there was no doubt that the animals were about as far beyond the realms of the living as it is possible to go and then return. "The pig is as white as you can get," says Rhee. "It's just pale, refrigerator meat." If the animals had been cooled quickly enough, however – at around 2C a minute – nearly 90% recovered when their blood was returned to their bodies, after having lain in limbo for more than an hour.
"It's the most amazing thing to witness – when the heartbeat comes back," says Rhee.

Once the animals had returned to more regular activity, the team performed several tests to check that their brains hadn't been damaged. For instance, before the procedure, the researchers trained some of the pigs to open a container of a certain colour, with an apple hidden inside. After they had been revived, most of the animals remembered where to fetch their treat. Other pigs that hadn't been trained before the operation were instead taught the procedure soon after their recovery. They managed to learn just as quickly as the others, again suggesting that there had been no effect on their memories.

Needless to say, gaining approval for human trials has been a struggle. Earlier this year, Tisherman was finally allowed to set up a pilot trial in Pittsburgh to treat patients suffering from gunshot wounds. The hospital sees about one or two such patients a month, meaning that some have already been treated with the technique since the trial began, although it is too early for Tisherman to speak about the results yet. He is also setting up a trial in Baltimore, Maryland, and, all being well, Rhee will later be able to begin work at Tucson's trauma centre.

As with any medical research, there will be some challenges in the transition from the animal experiments to the human trials. The animals received their own blood at the end of the operation, for instance, whereas the patients in this trial will need transfusions that have been sitting in blood banks for weeks. And while the animals were under anaesthesia at the time of injury, the patients won't have been, which could change the way their bodies react to the injury. Tisherman remains optimistic, however.
"We generally think that dogs and pigs respond to bleeding in a similar way to humans."

Other doctors are watching with interest. "It's very brave," says Parnia. "Many of us feel that in order to preserve the brain, we have to cool the body a lot more than we've done traditionally. But people have been afraid."

If the trials go according to plan, Tisherman would like to extend the approach to other kinds of trauma. Gunshot victims were chosen for the initial trial because it is easier to localise the source of blood loss, but he hopes eventually to treat internal bleeding from an automobile accident, for instance. It may even, one day, be used to treat people suffering from heart attacks and other kinds of illness.

Success could also pave the way for investigations into other forms of suspended animation. Some scientists are looking into whether a cocktail of drugs added to the saline solution pumped into the body could further reduce the body's metabolism and prevent injury. One promising candidate was hydrogen sulphide – the chemical that gives rotten eggs their smell – but although it has been found to reduce the metabolism of some animals, there is little evidence that it improves their chances of survival after a cardiac arrest. Tisherman instead thinks it will be better to find potent anti-oxidants that can mop up the harmful chemicals that cause injury.

For Rhee, the need for better treatment is all too urgent. He points to the fate of a patient he saw at the hospital only the day before we spoke. "He was shot in the epigastrium, right under the chest in the middle of the belly," he says. The hospital staff tried everything they could, but he still died. "It's exactly the kind of patient we hope we could repair if we'd been able to work in a less rushed fashion."

This article is part of a series on "Comebacks".


Toy-like rockets could someday carry tiny satellites and human ashes to space

Twenty years after its founding, Microlaunchers could finally find the interest it needs to build huge fleets of small rockets. When Microlaunchers was founded in 1995, rocket launches were dominated by large companies and governments interested in shipping huge amounts of cargo up to space.

Things aren’t too different today – the space industry still revolves around getting large items like telecommunications satellites to orbit – but change is happening. Companies like SpaceX and Firefly are reducing the cost for small payloads to make it to space. In 2013, more shoebox-sized satellites known as CubeSats launched than in all prior years combined.

Microlaunchers, which wants to manufacture large numbers of rockets weighing anywhere between 220 pounds and 60 tons, is catching its second wind on the strength of that trend. The startup would charge $50,000 to launch a single 2.2-pound CubeSat, and provide a private rocket that can launch at any time weather permits. CEO Charles Pooley imagines rockets and launchpads small enough to be moved and handled as if they were toys. Larger rockets would be capable of carrying more cargo. Dozens could be lined up in a single area and launch every hour or two.

“These things would be far smaller and simpler than any rocket in the business," Pooley said in an interview. “As soon as it becomes physically possible to make thousands of these, (Microlaunchers rockets) will be sold."

A 2.2-pound CubeSat would generally cost around $12,600 to send to low-Earth orbit through SpaceX, or $20,000 through an emerging option like Firefly. But with a SpaceX rocket, a CubeSat maker might have to wait months before space opens up. Then the launch schedule is at the mercy of the largest holders of cargo.
Firefly can provide greater control for small companies, but they still need to buy most or all of the $8 million to $9 million worth of cargo space it will provide each launch.

Microlaunchers co-founders Pooley and COO Blair Gordon have waited 20 years to see the rockets built. This year, the company embarked on raising $600,000 on AngelList and was accepted into the Las Vegas Start Up Hive incubator. If the money comes through, Microlaunchers plans to build its first fleet of 100 rockets.

“There is demand for it," Gordon said. “We just have to get to the next phase."

Most CubeSats have been dedicated to collecting images and other data on Earth. But Pooley and Gordon are wary of sending many more tiny satellites into Earth’s orbit. Each new satellite increases the chances of collisions, which create dangerous space debris. Instead, they would like to see small spacecraft dedicated to exploring other parts of the solar system and beyond. Each pound sent beyond low-Earth orbit would cost roughly $125,000.

Easier launches for tiny amounts of cargo would also open up opportunities for a rare service: space burials. Since 1992, cremated human remains have been released into space. Microlaunchers foresees the possibility of doing thousands a month.

“This is a brand new way of approaching space that does not yet exist that can," Pooley said. “The problem is getting people to grasp the idea."
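For comparison, the list prices quoted above work out to the per-kilogram figures below. This is just a sketch using the article's own numbers; a 2.2-pound CubeSat is almost exactly 1 kg, so the per-CubeSat prices double as per-kilogram prices.

```python
# Rough per-kilogram launch costs from the figures quoted above.
LB_TO_KG = 0.45359237

cubesat_prices = {            # $ per 2.2-lb (~1 kg) CubeSat to low-Earth orbit
    "SpaceX (rideshare)": 12_600,
    "Firefly": 20_000,
    "Microlaunchers": 50_000,
}

for provider, price in cubesat_prices.items():
    print(f"{provider}: ${price / (2.2 * LB_TO_KG):,.0f}/kg to LEO")

# Beyond low-Earth orbit, Microlaunchers quotes $125,000 per pound:
beyond_leo_per_kg = 125_000 / LB_TO_KG
print(f"Beyond LEO: ${beyond_leo_per_kg:,.0f}/kg")
```

The takeaway is the premium Microlaunchers would charge for a dedicated, on-demand ride: roughly four times SpaceX's rideshare price per kilogram to LEO, and over a quarter of a million dollars per kilogram beyond it.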


Researchers have set March 16, 2880 as one of Earth’s possible destruction dates

Researchers have set March 16, 2880 as one of Earth’s possible destruction dates, because on that day an asteroid hurtling through space has a probability of striking the globe. The scientists who have been examining the rock discovered that it revolves so rapidly that it should already have broken apart, but by some means it has remained intact on its Earth-bound path.

They think the asteroid is possibly held together by cohesive forces known as van der Waals forces. These are forces that hold molecules together and are essential to biology, chemistry and physics. Even though the discovery is considered a key breakthrough, researchers have no idea at this time how to stop the asteroid.

The asteroid’s trajectory was determined by scientists at the University of Tennessee at Knoxville. Prior data has revealed that asteroids are loose mounds of debris held together by friction and gravity. This rock, which has been named 1950 DA, is an asteroid two-thirds of a mile in length. It is moving about nine miles per second relative to the Earth, and it rotates once every two hours or so.

At this rate, the asteroid should sooner or later be showing signs of breaking apart, but so far it has not done so. In fact, the spin is so fast at its equator that scientists believe 1950 DA is effectively experiencing negative gravity. The existence of cohesive forces had been predicted in small asteroids, but conclusive proof had never been seen before.

Researchers think Asteroid 1950 DA could fly so near to the Earth that it smashes into the Atlantic Ocean at nearly 40,000 miles per hour. It has been estimated that if 1950 DA were to strike the planet, it would do so with a force of nearly 45,000 megatons of TNT.
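Both the "negative gravity" claim and the impact-energy figure can be sanity-checked with a back-of-the-envelope calculation. This is a sketch only: the asteroid is idealised as a uniform sphere, and its bulk density is an assumption (taken here as 2,000 kg/m³, typical for rocky asteroids), since the article doesn't give one.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
radius = 535.0         # m: half of two-thirds of a mile (~1.07 km)
density = 2000.0       # kg/m^3 -- assumed, not stated in the article
period = 2.1 * 3600    # s: rotation once every ~2 hours

# Surface gravity of a uniform sphere: g = (4/3) * pi * G * rho * r
g_surface = (4.0 / 3.0) * math.pi * G * density * radius

# Centrifugal acceleration at the equator: a = omega^2 * r
omega = 2.0 * math.pi / period
a_centrifugal = omega**2 * radius

print(f"gravity at surface:     {g_surface:.2e} m/s^2")
print(f"centrifugal at equator: {a_centrifugal:.2e} m/s^2")
print("net 'negative gravity':", a_centrifugal > g_surface)

# Kinetic energy at ~40,000 mph, expressed in megatons of TNT
v = 40_000 * 0.44704                               # m/s
mass = density * (4.0 / 3.0) * math.pi * radius**3
megatons = 0.5 * mass * v**2 / 4.184e15            # 1 Mt TNT = 4.184e15 J
print(f"impact energy: ~{megatons:,.0f} Mt TNT")   # same order as the ~45,000 Mt quoted
```

Even with a rough density, the equatorial centrifugal acceleration comes out larger than the body's own gravity, which is exactly why loose rubble at the equator should fly off unless cohesive forces hold it down, and the kinetic energy lands in the same tens-of-thousands-of-megatons range the researchers cite.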
Even though the chance of an impact is believed to be only about 0.3 percent, this represents a risk nearly 50 percent greater than that posed by impacts from other asteroids.

Ben Rozitis, Joshua Emery and Eric MacLennan, all scientists at the Department of Planetary Sciences at UT, have been trying to figure out why the rock has not come apart yet. They think the asteroid’s cohesive forces are definitely what is keeping it together. Their findings were published in the most recent edition of the science journal Nature.

A high level of interest has turned toward figuring out how to handle the possible hazards of an asteroid impact since the February 2013 asteroid impact in Russia, explained Professor Rozitis.

Over the time scale of Earth’s past, asteroids around this size and bigger have sporadically slammed into the planet. It is believed that the so-called K/T impact ended the age of the dinosaurs over 65 million years ago.

Asteroid 1950 DA was first discovered on Feb. 23, 1950. It was observed for just over 15 days before fading from view for a 50-year span; when it was seen again on Dec. 31, 2000, it was recognized as being 1950 DA.

That sighting on New Year’s Eve happened to fall exactly 200 years to the night after the discovery of the first asteroid, Ceres – not to mention at the actual beginning of the 21st century.

However, researchers are far from worried.
They believe that if it is ever decided that the space rock should be diverted, having these hundreds of years of warning will allow time for some sort of discovery that lets the asteroid be directed away from Earth.

Researchers believe the date of March 16, 2880 could mark Earth’s possible obliteration, because on that day Asteroid 1950 DA has a chance of hitting the world.

By Kimberly Ruble

Sources: Science World Report, The Daily Mail, International Business Times


The History Inside Us

Almost all of human history is unrecorded. Every day our DNA breaks a little. Special enzymes keep our genome intact while we’re alive, but after death, once the oxygen runs out, there is no more repair. Chemical damage accumulates, and decomposition brings its own kind of collapse: membranes dissolve, enzymes leak, and bacteria multiply. How long until DNA disappears altogether? Since the delicate molecule was discovered, most scientists had assumed that the DNA of the dead was rapidly and irretrievably lost. When Svante Pääbo, now the director of the Max Planck Institute for Evolutionary Anthropology in Germany, first considered the question more than three decades ago, he dared to wonder if it might last beyond a few days or weeks. But Pääbo and other scientists have now shown that if only a few of the trillions of cells in a body escape destruction, a genome may survive for tens of thousands of years.

In his first book, Neanderthal Man: In Search of Lost Genomes, Pääbo logs the genesis of one of the most groundbreaking scientific projects in the history of the human race: sequencing the genome of a Neanderthal, a human-like creature who lived until about 40,000 years ago. Pääbo’s tale is part hero’s journey and part guidebook to shattering scientific paradigms. He began dreaming about the ancients on a childhood trip to Egypt from his native Sweden. When he grew up, he attended medical school and studied molecular biology, but the romance of the past never faded. As a young researcher, he tried to mummify a calf liver in a lab oven and then extract DNA from it. Most of Pääbo’s advisors saw ancient DNA as a “quaint hobby," but he persisted through years of disappointing results, patiently awaiting technological innovation that would make the work fruitful.
All the while, Pääbo became adept at recruiting researchers, luring funding, generating publicity, and finding ancient bones.

Eventually, his determination paid off: in 1996, he led the effort to sequence part of the Neanderthal mitochondrial genome. (Mitochondria, which serve as cells’ energy packs, appear to be remnants of an ancient single-celled organism, and they have their own DNA, which children inherit from their mothers. This DNA is simpler to read than the full human genome.) Finally, in 2010, Pääbo and his colleagues published the full Neanderthal genome.

That may have been one of the greatest feats of modern biology, yet it is also part of a much bigger story about the extraordinary utility of DNA. For a long time, we have seen the genome as a tool for predicting the future. Do we have the mutation for Huntington’s? Are we predisposed to diabetes? But it may have even more to tell us about the past: about distant events and about the network of lives, loves, and decisions that connects them.

Empires

Long before research on ancient DNA took off, Luigi Cavalli-Sforza made the first attempt to rebuild the history of the world by comparing the distribution of traits in different living populations. He started with blood types; much later, his popular 2001 book Genes, Peoples, and Languages explored demographic history via languages and genes. Big historical arcs can also be inferred from the DNA of living people, such as the fact that all non-Africans descend from a small band of humans that left Africa 60,000 years ago. The current distribution across Eurasia of a certain Y chromosome—which fathers pass to their sons—rather neatly traces the outline of the Mongolian Empire, leading researchers to propose that it comes from Genghis Khan, who pillaged and raped his way across the continent in the 13th century.

But in the last few years, geneticists have found ways to explore not just big events but also the dynamics of populations through time.
A 2014 study used the DNA of ancient farmers and hunter-gatherers from Europe to investigate an old question: Did farming sweep across Europe and become adopted by the resident hunter-gatherers, or did farmers sweep across the continent and replace the hunter-gatherers? The researchers sampled ancient individuals who were identified as either farmers or hunters, depending on how they were buried and what goods were buried with them. A significant difference between the DNA of the two groups was found, suggesting that even though there may have been some flow of hunter-gatherer DNA into the farmers’ gene pool, for the most part the farmers replaced the hunter-gatherers.

Looking at more recent history, Peter Ralph and Graham Coop compared small segments of the genome across Europe and found that any two modern Europeans who lived in neighboring populations, such as Belgium and Germany, shared between two and 12 ancestors over the previous 1,500 years. They identified tantalizing variations as well. Most of the common ancestors of Italians seem to have lived around 2,500 years ago, dating to the time of the Roman Republic, which preceded the Roman Empire. Though modern Italians share ancestors within the last 2,500 years, they share far fewer of them than other Europeans share with their own countrymen. In fact, Italians from different regions of Italy today have about the same number of ancestors in common with one another as they have with people from other countries. The genome reflects the fact that until the 19th century Italy was a group of small states, not the larger country we know today.

In a very short amount of time, the genomes of ancient people have facilitated a new kind of population genetics.
It reveals phenomena that we have no other way of knowing about.

Significant events in British history suggest that the genetics of Wales and some remote parts of Scotland should be different from genetics in the rest of Britain, and indeed, a standard population analysis on British people separates these groups out. But this year scientists led by Peter Donnelly at Oxford uncovered a more fine-grained relationship between genetics and history. By tracking subtle patterns across the genomes of modern Britons whose ancestors lived in particular rural areas, they found at least 17 distinct clusters that probably reflect different groups in the historic population of Britain. This work could help explain what happened during the Dark Ages, when no written records were made—for example, how much ancient British DNA was swamped by the invading Saxons of the fifth century.

The distribution of certain genes in modern populations tells us about cultural events and choices, too: after some groups decided to drink the milk of other mammals, they evolved the ability to tolerate lactose. The descendants of groups that didn’t make this choice don’t tolerate lactose well even today.

Mysteries

Analyzing the DNA of the living is much easier than analyzing ancient DNA, which is always vulnerable to contamination. The first analyses of Neanderthal mitochondrial DNA were performed in an isolated lab that was irradiated with UV light each night to destroy DNA carried in on dust. Researchers wore face shields, sterile gloves, and other gear, and if they entered another lab, Pääbo would not allow them back that day. Still, controlling contamination only took Pääbo’s team to the starting line. The real revolution in analysis of ancient DNA came in the late 1990s, with second-generation DNA sequencing techniques.
Pääbo replaced Sanger sequencing, invented in the 1970s, with a technique called pyrosequencing, which meant that instead of sequencing 96 fragments of ancient DNA at a time, he could sequence hundreds of thousands.

Such breakthroughs made it possible to answer one of the longest-running questions about Neanderthals: did they mate with humans? There was scant evidence that they had, and Pääbo himself believed such a union was unlikely because he had found no trace of Neanderthal genetics in human mitochondrial DNA. He suspected that humans and Neanderthals were biologically incompatible. But now that the full Neanderthal genome has been sequenced, we can see that 1 to 3 percent of the genome of non-Africans living today contains variations, known as alleles, that apparently originated with Neanderthals. That indicates that humans and Neanderthals mated and had children, and that those children’s children eventually led to many of us. The fact that sub-Saharan Africans do not carry the same Neanderthal DNA suggests that Neanderthal-human hybrids were born just as humans were expanding out of Africa 60,000 years ago and before they colonized the rest of the world. In addition, the way Neanderthal alleles are distributed in the human genome tells us about the forces that shaped lives long ago, perhaps helping the earliest non-Africans adapt to colder, darker regions. Some parts of the genome with a high frequency of Neanderthal variants affect hair and skin color, and the variants probably made the first Eurasians lighter-skinned than their African ancestors.

Ancient DNA will almost certainly complicate other hypotheses, like the African-origin story, with its single migratory human band. Ancient DNA also reveals phenomena that we have no other way of knowing about. When Pääbo and colleagues extracted DNA from a few tiny bones and a couple of teeth found in a cave in the Altai Mountains in Siberia, they discovered an entirely new sister group, the Denisovans.
Indigenous Australians, Melanesians, and some groups in Asia may have up to 5 percent Denisovan DNA, in addition to their Neanderthal DNA.

In a very short amount of time, a number of ancients have been sequenced by teams all over the world, and the growing library of their genomes has facilitated a new kind of population genetics. What is it that DNA won’t be able to tell us about the past? It may all come down to what happened in the first moments or days after someone’s death. If, for some reason, cells dry out quickly—if you die in a desert or a dry cave, if you are frozen or mummified—post-mortem damage to DNA can be halted, but it may never be possible to sequence DNA from remains found in wet, tropical climates. Still, even working with only the scattered remains that we have found so far, we keep gaining insights into ancient history. One of the remaining mysteries, Pääbo observes, is why modern humans, unlike their archaic cousins, spread all over the globe and dramatically reshaped the environment. What made us different? The answer, he believes, lies waiting in the ancient genomes we have already sequenced.

There is some irony in the fact that Pääbo’s answer will have to wait until we get more skillful at reading our own genome. We are at the very beginning stages of understanding how the human genome works, and it is only once we know ourselves better that we will be able to see what we had in common with Neanderthals and what is truly different.

Christine Kenneally is the author of The Invisible History of the Human Race, to be published in October.

http://www.technologyreview.com/review/530031/the-history-inside-us/


Mars Rover To Create Oxygen

NASA’s Mars 2020 rover will take a small step towards helping us directly explore the red planet, by studying how to convert its carbon dioxide atmosphere to oxygen. Jack Mustard from Brown University in the US suggests the Mars Oxygen In-Situ Resource Utilization Experiment (MOXIE) technology could in the future help refuel vehicles returning to Earth. ‘It represents an opportunity to sever the tether between Earth and exploration,’ says Mustard, who chaired the Mars 2020 science definition team. Based on the current Curiosity rover’s design, Mars 2020 will carry seven instruments, including MOXIE, together costing approximately $130 million (£77 million). MOXIE itself will be a reverse fuel cell, developed at the Massachusetts Institute of Technology, converting CO2 into oxygen and carbon monoxide via solid oxide electrolysis. The oxygen can then either be breathed by people or burned as fuel.

Read the full article here: http://www.rsc.org/chemistryworld/2014/08/nasa-mars-2020-rover-carbon-dioxide-oxygen
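The overall electrolysis reaction is 2 CO2 → 2 CO + O2, so simple molar-mass bookkeeping shows how much oxygen each kilogram of Martian CO2 can yield. This is a sketch using standard atomic masses; it ignores conversion efficiency, which in a real device would be well below 100 percent.

```python
# Mass balance for solid oxide electrolysis of CO2: 2 CO2 -> 2 CO + O2
M_C, M_O = 12.011, 15.999          # g/mol, standard atomic masses
M_CO2 = M_C + 2 * M_O              # ~44.01 g/mol
M_CO = M_C + M_O                   # ~28.01 g/mol
M_O2 = 2 * M_O                     # ~32.00 g/mol

# Per 2 mol of CO2 consumed, 2 mol CO and 1 mol O2 are produced.
o2_per_kg_co2 = M_O2 / (2 * M_CO2)        # kg O2 per kg CO2
co_per_kg_co2 = (2 * M_CO) / (2 * M_CO2)  # kg CO per kg CO2

print(f"O2 yield: {o2_per_kg_co2:.3f} kg per kg CO2")
print(f"CO yield: {co_per_kg_co2:.3f} kg per kg CO2")
assert abs(o2_per_kg_co2 + co_per_kg_co2 - 1.0) < 1e-9  # mass is conserved
```

In other words, at best a bit over a third of the mass of ingested CO2 comes back as breathable or burnable oxygen, with the remainder leaving as carbon monoxide.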


"Microhabitats" Show Potential for Alien Life

An international team of researchers has found extremely small habitats that increase the potential for life on other planets while offering a way to clean up oil spills on our own. Looking at samples from the world's largest natural asphalt lake, they found active microbes in droplets as small as a microliter, which is about 1/50th of a drop of water. "We saw a huge diversity of bacteria and archaea," said Dirk Schulze-Makuch, a professor in Washington State University's School of the Environment and the only U.S. researcher on the team. "That's why we speak of an 'ecosystem,' because we have so much diversity in the water droplets."

Writing in the journal Science, the researchers report they also found the microbes were actively degrading oil in the asphalt, suggesting a similar phenomenon could be used to clean up oil spills. "For me, the cool thing is I got into it from an astrobiology viewpoint, as an analog to Saturn's moon, Titan, where we have hydrocarbon lakes on the surface," said Schulze-Makuch. "But this shows astrobiology also has great environmental applications, because of the biodegradation of oil compounds."

Schulze-Makuch and his colleagues in 2011 found that the 100-acre Pitch Lake, on the Caribbean island of Trinidad, was teeming with microbial life, which is also thought to increase the likelihood of life on Titan. The new paper adds a new, microscopic level of detail to how life can exist in such a harsh environment. "We discovered that there are additional habitats that we have not looked at where life can occur and thrive," said Schulze-Makuch.

Analyzing the droplets' isotopic signatures and salt content, the researchers determined that they were not coming from rain or groundwater, but from ancient sea water or a brine deep underground.

Read the article here: http://www.sciencedaily.com/releases/2014/08/140807145738.htm


Vintage Spacecraft Makes Its Comeback

A 36-year-old NASA spacecraft will begin a new interplanetary science mission today (Aug. 10) when it makes a close pass by the moon. The privately controlled International Sun-Earth Explorer 3 spacecraft, also called ISEE-3, will fly by the moon at 2:16 p.m. EDT (1816 GMT). You can follow the lunar flyby live in a Google Hangout beginning at 1:30 p.m. EDT (1730 GMT) on the website SpacecraftforAll.com.

The ISEE-3 spacecraft is under the control of the ISEE-3 Reboot Project, a private team of engineers that took control of the probe earlier this year under an agreement with NASA. The team initially hoped to move the NASA probe into a stable orbit near the Earth, but that attempt failed when the team discovered that the spacecraft, which NASA launched in 1978, was out of the nitrogen pressurant needed to get the job done. Now, ISEE-3 Reboot Project engineers are focusing their efforts on an interplanetary science mission, since at least some of the probe's 13 instruments are still working. By using a network of individual radio dishes across the world, the team will listen to the ISEE-3 spacecraft for most of its orbit around the sun.

Read the full article here: http://www.space.com/26783-vintage-nasa-spacecraft-moon-flyby-isee3.html

Check out the live feed of the spacecraft: http://spacecraftforall.com/


Supercharging Your Brain

With a jolt of electricity, you might be able to enter a flow state that allows you to learn a new skill twice as fast, solve problems that have mystified you for hours, or even win a sharpshooting competition. And this just scratches the surface of what we might be able to do to improve cognition as our understanding of the brain improves. With an implanted chip, the possibilities might be close to limitless.

Researchers think that as we learn more about the brain, we'll be able to use electricity to boost focus, memory, learning, mathematical ability, and pattern recognition. Electric stimulation may also clear away depression and stave off cognitive decline. We'll eventually even implant computer chips that allow us to directly search the web for information or even download new skills — like Neo learning kung fu in The Matrix. We're heading down a path that will allow us to supercharge the brain.

The key is decoding how the brain works. That's the hurdle in the way, and the one that billions of dollars in research funding are going toward right now. "I don't think there's any doubt we'll eventually understand the brain," says Gary Marcus, a professor of psychology at New York University and an editor of the upcoming book “The Future of the Brain: Essays by the World’s Leading Neuroscientists." "The big question is how long it's going to take," he says.

Read the full article here: http://www.businessinsider.com/brain-hacking-will-make-us-smarter-and-more-productive-2014-7#ixzz3A3dmYqqe


Pure Dumb Luck Saved Us from a Calamity in 2012 Sparked by One of the Strongest Solar Storms in Recorded History

On July 23, 2012, billions of tons of plasma exploded from the Sun and raced out into space in what was one of the most powerful solar storms ever recorded. And it’s only by chance that Earth wasn’t in the way of this gargantuan coronal mass ejection, or CME.

“If it had hit, we would still be picking up the pieces," Daniel Baker of the University of Colorado told NASA.

And that may be putting it mildly. A CME of the size that almost hit us in July of 2012 would probably knock out satellites we depend on for modern telecommunications and also cause global blackouts lasting for months. Everything that plugs into a wall socket would be disabled. And as NASA puts it, “Most people wouldn’t even be able to flush their toilet because urban water supplies largely rely on electric pumps."

For a good overview of the solar storm that almost caused this kind of mayhem in 2012, and how it compared to the notorious Carrington event of September 1859 (which set telegraph lines on fire and caused auroral displays as far south as Cuba), check out the video above.

“I have come away from our recent studies more convinced than ever that Earth and its inhabitants were incredibly fortunate that the 2012 eruption happened when it did," Baker says. “If the eruption had occurred only one week earlier, Earth would have been in the line of fire."

The cloud of particles, rocketing outward at 3,000 kilometers per second — more than four times faster than a typical CME — hit NASA’s STEREO-A spacecraft. As a result of its design, and the fact that it travels in interplanetary space (a much safer place, it turns out, than inside Earth’s magnetosphere), the spacecraft survived and transmitted valuable data and imagery back to Earth.

The NASA video above includes some of that imagery. Other spacecraft also captured direct and indirect evidence of the event, and you can see some of that in the video too.
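At 3,000 kilometers per second, the Sun-to-Earth transit time is startlingly short, as a quick calculation shows. This is a sketch assuming the mean Sun-Earth distance of 1 AU and a constant speed; it ignores the deceleration real CMEs experience plowing through the solar wind.

```python
AU_KM = 1.495979e8       # mean Sun-Earth distance in kilometers

def transit_hours(speed_km_s: float) -> float:
    """Hours for a CME front to cover 1 AU at a constant speed."""
    return AU_KM / speed_km_s / 3600

print(f"2012 event (~3,000 km/s): {transit_hours(3000):.1f} hours")
print(f"typical CME (~700 km/s):  {transit_hours(700) / 24:.1f} days")
```

The fast 2012 ejection would have crossed the distance in well under a day, versus roughly two and a half days for a typical CME, which is part of why such extreme events leave so little time to safeguard satellites and power grids.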
Earth stands about a 12 percent chance of actually being hit by material from a solar storm of this magnitude in the next decade, according to research by physicist Pete Riley published in the journal Space Weather. So it’s not a question of if we’ll be hit; it’s only a question of when. And unfortunately, the world is woefully unprepared to deal with the consequences.

By Tom Yulsman | August 2, 2014 1:27 am

http://blogs.discovermagazine.com/imageo/2014/08/02/luck-saved-us-from-calamity-2012-solar-storms/#.U-Xn8fmSx8E


If the Body Is a Machine, Can It Be Maintained Indefinitely?

To Aubrey de Grey, the body is a machine. Just as a restored classic car can celebrate its hundredth birthday in peak condition, in the future we’ll maintain our bodies’ cellular components to stave off the diseases of old age and live longer, healthier lives.

Dr. de Grey is cofounder and Chief Science Officer of the SENS Research Foundation and faculty at Singularity University’s November Exponential Medicine conference—an event exploring the healthcare impact of technologies like low-cost genomic sequencing, artificial intelligence, synthetic biology, gene therapy, and more.

Recently speaking to participants in Singularity University’s graduate studies program, de Grey said the greatest challenge in aging research today is less technical in nature and more a matter of misguided focus in the mainstream.

Most approaches to age-related disease aim to manage symptoms. They have contributed to longer life expectancy and eased complications, but because treatments interfere with the body’s finely tuned systems, they can have nasty side effects and are ultimately powerless (even with advances) to reverse age-related illness. Why?

“Aging is a side effect of being alive in the first place," says de Grey. Metabolic processes drive the day-to-day business of living, but they also inevitably cause cellular damage. The body’s range of self-repair mechanisms doesn’t take care of everything. Eventually, a lifetime of accumulated damage causes the familiar signs of aging like “thinning skin, cloudy eyes, muscles sapped of strength, heart disease, and cognitive decline."

De Grey is known for his research into engineered negligible senescence. Negligible senescence is a term used to describe certain animals that don’t display symptoms of aging.
De Grey believes we can use biotechnology to engineer negligible senescence in humans, and he cofounded the SENS Research Foundation to lead the way.

SENS focuses on seven categories of universal damage that contribute to aging. These include cell loss and decreasingly effective cell replacement in the muscles, heart, and brain; uncontrolled cell division (cancer); the accumulation of toxins produced by damaged mitochondria (cellular power plants); the accretion of malfunctioning cells; the stiffening of tissues (like those in our arteries); and the accumulation of extracellular and intracellular junk.

There’s a lot to know about each category, but all have a few things in common—they’re natural byproducts of a body in motion; they’re mostly harmless in small quantities; and, if left unchecked, accumulated damage results in a range of ailments.

De Grey says we’ve known about the prime culprits of aging for decades, and after years of research, haven’t turned up many more. Now the challenge is using that knowledge to take action.

According to de Grey, there are already generic treatments to address all seven categories of aging either in development or believed to be feasible based on current research. Examples include stem cell therapies to reverse cell loss and Alzheimer’s drugs to clear amyloid plaques in the brain. Though these areas are well-funded, not all areas attract as much financial backing. De Grey’s organization focuses on neglected areas of research. (Go here for more detail on past and ongoing SENS research initiatives.)

Of course, tackling aging is a massively ambitious cause. But de Grey is a driven individual.
A trained computer scientist, he got into the field because age-related illness is universal—a promised suffering that everyone will one day endure. Along the way, de Grey has faced his share of controversy.

In 2005, MIT Technology Review placed a $20,000 bounty on papers refuting the scientific merit of SENS. A panel of judges appointed to read submissions concluded that none succeeded in disproving de Grey, but neither had de Grey convincingly defended SENS. Rather, as a whole, it remained an intriguing but not fully tested hypothesis.

Mainstream skepticism isn’t terribly surprising. Even when research into age-related disease has significant backing and shows promise, progress is slow, and major setbacks are common. Repeated failures in clinical trials of Alzheimer’s drugs to clear amyloid plaques, for example, have been disheartening. While the drugs do get rid of plaques, they haven’t been shown to improve cognitive function or significantly slow decline.

De Grey says you can’t expect dramatic results by working in only one area—you’ve got to work in them all. For example, in addition to plaques outside cells, Alzheimer’s is also thought to result from intracellular protein debris called tangles. Further, by the time patients exhibit behavioral symptoms, significant fractions of the brain have already died.

Ultimately, SENS has more to prove. But that’s the way of science. Hypothesis is followed by research and evidence. Nathan Myhrvold, former Microsoft CTO and a judge on the Tech Review panel, put it beautifully when he said:

“We need to remember that all hypotheses go through a stage where one or a small number of investigators believe something and others raise doubts…while most radical ideas are in fact wrong, it is a hallmark of the scientific process that it is fair about considering new propositions; every now and then, radical ideas turn out to be true.
Indeed, these exceptions are often the most momentous discoveries in science."Though de Grey’s foundation has attracted experts in research institutes around the world, the approach has struggled to raise as much funding as they’d like. Funding, not technology has been the biggest bottleneck, according to de Grey. But recently, longevity research has seen growing interest and resources dedicated to it. The May 2013 issue of National Geographic with an infant on the cover declared, “This Baby Will Live to Be 120—and It’s Not Just Hype." Others organizations are taking up the cause. Google-funded longevity research firm, Calico, for example, or Craig Venter’s new firm, Human Longevity, Inc. which aims to build a database of tens of thousands of human genomes and their related phenotypes—or the observable characteristics of gene expression in individuals—to discover the genetics of aging.Eventually, the burden of proof may well reach critical mass, begetting a virtuous cycle of investment and breakthrough. Or maybe aging will prove a persistently difficult nut to crack. De Grey favors the former forecast, and he’s breakthrough-agnostic.“I’d love to be scooped," he says.Within the next three decades, de Grey believes there’s a 50/50 chance we’ll have marshaled the necessary therapies to add 30 years to life expectancy.As a natural byproduct of these enabling technologies, those extra years ought to be healthier and more active. And that will only be the beginning. Those who live 30 more years will be alive to see additional advances add more years, and so on.What of the social, economic, and environmental implications of people living dramatically longer lives? De Grey says simply, “I have no idea." A common error we make worrying about the future is assuming nothing changes between now and then.No one will be celebrating their 200th birthday for at least a hundred years, and a lot can happen in a century. The best we can do is keep our eyes on the prize. 
And for de Grey, the prize is clear, “I don’t want us to go on dying the way we always have."If de Grey has his way, one day we’ll regularly visit the rejuvenation clinic to “replace, remove, repair, and reinforce" bodily systems damaged by the business of living.We’ll not only live longer, we’ll do it in style.Learn from de Grey and other leading experts how emerging technologies will revolutionize medicine and healthcare at Singularity University’s Exponential Medicine conference (San Diego, November 9-12). Singularity Hub readers applying with discount code 500HUB before August 31st receive $500 off General Participant tickets.Image Credit: Shutterstock.com; Bùi Linh Ngân/Flickr; NIH/National Cancer Institute/Wikimedia Commons http://singularityhub.com/2014/08/03/on-the-road-to-the-fountain-of-youth/

Life beyond Earth seems 'inevitable', US planetary scientist says

Astronomers are standing on a "great threshold" of space exploration, Dr Sara Seager says.

A globular cluster in the Milky Way, which contains about 100bn stars. Photograph: F. Ferraro/AFP/Getty Images

Astronomers are standing on a "great threshold" of space exploration that could see evidence of extra-terrestrial life being discovered in the next 20 years, an expert has claimed. Life beyond the Earth seems inevitable given the immensity of the universe, says US planetary scientist Dr Sara Seager. In the coming decades, chemical fingerprints of life written in the atmospheres of planets orbiting nearby stars could be found by the next generation of space telescopes.

Writing in the journal Proceedings of the National Academy of Sciences, Seager, from the Massachusetts Institute of Technology (MIT), said: "We can say with certainty that, for the first time in human history, we are finally on the verge of being able to search for signs of life beyond our solar system around the nearest hundreds of stars." Astronomers now know that statistically every star in our galaxy, the Milky Way, should have at least one planet, and small rocky worlds like the Earth are common. "Our own galaxy has 100bn stars and our universe has upwards of 100bn galaxies – making the chance for life elsewhere seem inevitable based on sheer probability," said Seager.

In the next decade or two, a handful of "potentially habitable" exoplanets will have been found with atmospheres that can be studied in detail by sophisticated space telescopes. The first of these "next generation" telescopes will be the American space agency Nasa's James Webb Space Telescope (JWST), due to be launched in 2018. It will be able to analyse the atmospheres of dozens of "super-Earths" – rocky planets somewhat larger than Earth – including several that could harbour life. Studying a planet's atmosphere for signs of life involves capturing starlight filtering through its gases. Different elements absorb different wavelengths of light, providing information about the atmosphere's make-up.

Living things, from bacteria to large animals, are expected to produce "biosignature" gases that could be detected in a planet's atmosphere. They include oxygen, ozone, nitrous oxide, and methane. The problem faced by scientists is that some of these, such as methane, can be generated by geological processes as well as by life. The likelihood of "false positives" could be reduced by searching for rarer biosignature gases more closely tied to living systems, such as dimethyl sulphide (DMS) and methanethiol, said Seager. But she pointed out that observations using telescopes such as the JWST, which will focus on backlit "transiting" planets that happen to pass in front of their parent stars, will be limited.

Maximising the chances of finding evidence of extraterrestrial life will require a technological leap to methods of directly imaging large numbers of exoplanets. Such an undertaking is daunting, given that directly imaging an Earth-like exoplanet is equivalent to picking out a firefly in the glare of a searchlight from a distance of 2,500 miles. Yet two techniques now under development could make direct imaging of Earth twins possible. One involves specialised optics to block out interfering starlight and reveal the presence of orbiting exoplanets. The other is the "starshade" – an umbrella-like screen tens of metres in diameter placed tens of thousands of kilometres in front of a space telescope lens. The starshade is designed to cast a shadow blocking out light from a star while leaving a planet's reflected light unaffected.

"To be confident of finding a large enough pool of exoplanets to search for biosignature gases, we require the ability to directly image exoplanets orbiting 1,000 or more of the nearest Sun-like stars," said Seager. She added: "We stand on a great threshold in the human history of space exploration. If life is prevalent in our neighbourhood of the galaxy, it is within our reach to be the first generation in human history to finally cross this threshold and learn if there is life of any kind beyond Earth."

http://www.theguardian.com/science/2014/aug/04/extra-terrestrial-life-inevitable-planetary-scientist-astronomers
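The "transiting" planets mentioned above are characterised from the fractional dip they cause in their star's light, which scales as the square of the planet-to-star radius ratio. A minimal sketch of that geometry; the numbers here are illustrative values, not figures from the article:

```python
# Transit-depth sketch: a transiting planet blocks a fraction of starlight
# equal to (R_planet / R_star)^2 -- the ratio of the two discs' areas.

R_SUN_KM = 695_700     # solar radius, km
R_EARTH_KM = 6_371     # Earth radius, km

def transit_depth(planet_radius_km: float, star_radius_km: float) -> float:
    """Fraction of starlight blocked while the planet crosses the star."""
    return (planet_radius_km / star_radius_km) ** 2

# A hypothetical 1.5 Earth-radius "super-Earth" crossing a Sun-like star:
depth = transit_depth(1.5 * R_EARTH_KM, R_SUN_KM)
print(f"{depth * 1e6:.0f} ppm")  # a dip of under 200 parts per million
```

Dips this small are why space telescopes like the JWST, above the atmosphere's noise, are needed for the work Seager describes.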

NASA just bet big on these five extraordinary ideas

Beyond developing current missions, NASA's job is to invest in technologies that might seem like sci-fi but could hold the keys to the next generation of space missions. After all, a century ago the idea of a lunar landing probably seemed almost like pure sensational fiction. And this week, NASA picked the five seemingly distant concepts it wants to study further.

NASA's Innovative Advanced Concepts Program, or NIAC, is the organization in charge of parsing and selecting which concepts, from researchers, universities, and even independent companies, should receive backing from NASA. It has awarded more than $23 million to hundreds of ideas over the years, and now it has released the names of its next five big ideas. Let's take a look.

A Mothership That Deploys Hedgehog Rovers
This Stanford-developed concept is designed to help NASA explore small solar system bodies. Here's how it would work: a mothership that stays in space would deploy smaller robotic craft onto a small planet, moon, or even asteroid. Each rover pod, which the team nicknamed "hedgehogs" thanks to their stabilizing spikes, contains three flywheels that enable three different kinds of movement to explore these unknown bodies. First, they'll be able to hop over long distances, thanks to an attitude control system. They can tumble too. And finally, they'll be able to fly like regular spacecraft. The hedgehogs would help the mothership learn about and map smaller, unstable bodies without actually landing on them.

Orbiting Rainbows
NASA's storied Jet Propulsion Laboratory came up with this extraordinary idea: to build a massive optical system in space using huge clouds of dust particles. The cloud, which would be shaped by pressure, would form the aperture in the imaging system, magnifying the target so that NASA could see distant objects in space at high resolution. Why not just use a regular optical system launched from space? Well, because those are heavy and fragile. This strategy would be far easier to build and maneuver in space.

A Telescope Carried By a Sub-Orbital Balloon
NASA already uses balloons to see into space, like the BLAST balloon-borne telescope project, which revealed "half of the Universe's starlight." But a researcher at the Steward Observatory in Tucson imagines taking this concept further, launching a balloon more than 30 feet wide into sub-orbit. The huge balloon would act as a reflector for the telescope inside, making it easier to image objects in space.

Looking Inside Asteroids Using Subatomic Particles
Thomas H. Prettyman, a scientist at the Planetary Science Institute, wants to use subatomic particles like muons, generated when cosmic rays collide with objects, to actually peer inside those objects. The idea is to be able to look closely at asteroids and comets that are near Earth, and it's easy to imagine why. You could use this technology to, say, learn more about what minerals are inside an asteroid for potential mining purposes. Or it could give scientists a clear picture of the size and makeup of an object that might be on a collision course with Earth, helping to generate a strategy to knock it off course. Those two scenarios are still fictional, but this technique would also be helpful in the present, since it would give us so much information about interplanetary objects that we can't currently access.
Image: Asteroid Ida via NASA.

A Better Alternative To Telescopes For Long Space Missions
It's easy to see why there are so many imaging ideas on this list: as humankind ventures further into space, having better optical systems to observe the space around us will be absolutely essential. This concept, by S.J. Ben Yoo at the University of California, Davis, is designed to replace traditional, bulky telescopes on space missions. The design for this Low-Mass Planar Photonic Imaging Sensor involves packing "millions of direct detection white-light interferometers densely packed onto photonic integrated circuits," rather than using bulky traditional systems. According to the research team, their design "enables exciting new NASA missions since it provides a large-aperture, wide-field EO imager at a fraction of the cost, mass and volume of conventional space telescopes."

Read more about these concepts on NASA's site here. Lead image: Mission 40 launch via NASA.
http://gizmodo.com/nasa-just-bet-big-on-these-five-extraordinary-ideas-1618172356

Secrets of the Water Bear, the Only Animal That Can Survive in Space

Water Bear (Tardigrade), a tiny aquatic invertebrate, magnified x250 when printed at 10 centimetres wide.

The microscopic tardigrade, also known as the water bear, is the only animal that can survive the cold, irradiated vacuum of outer space. We talked to leading tardigrade researchers to find out what makes these little guys so amazing.

What's so special about tardigrades?
Tardigrades are a class of microscopic animals with eight limbs and strange, alien-like behavior. William Miller, a leading tardigrade researcher at Baker University, says these creatures are remarkably abundant. Hundreds of species "are found across the seven continents; everywhere from the highest mountain to the lowest sea," he says. "Many species of tardigrades live in water, but on land, you find them almost everywhere there's moss or lichen." In 2007, scientists discovered that these microscopic critters can survive an extended stay in the cold, irradiated vacuum of outer space. A European team of researchers sent a group of living tardigrades to orbit the earth on the outside of a FOTON-M3 rocket for ten days. When the water bears returned to Earth, the scientists discovered that 68 percent had lived through the ordeal.

Wait, what? How is that possible?
Although (as far as we know) tardigrades are unique in their ability to survive in space, Miller insists that there is no reason to believe they evolved for this reason or, as a misleading VICE documentary has implied, that they are of extraterrestrial origin. Rather, the tardigrade's space-surviving ability is the result of a strange response they've evolved to overcome an earthly life-threatening problem: water shortage. Land-dwelling tardigrades can be found in some of the driest places on earth. "I've collected living tardigrades from under a rock in the Sinai desert, in a part of the desert that hadn't had any record of rain for the previous 25 years," Miller says. Yet these are technically aquatic creatures, and they require a thin layer of water to do pretty much anything, including eating, having sex, or moving around. Without water, they're about as lively as a beached dolphin.

But land-dwelling tardigrades have evolved a bizarre solution to living through drought: when their environment dries up, so do they. Tardigrades will enter a state called desiccation, in which they shrivel up, losing all but around 3 percent of their body's water and slowing their metabolism down to an astonishing 0.01 percent of its normal speed. In this state, the tardigrade just persists, doing nothing, until it's inundated with water again. When that happens, the creature pops back to life like a re-wetted sponge and continues onward as if nothing had happened. What's even more astonishing is that tardigrades can survive being in this strange state for more than a decade. According to Miller, a few researchers believe some species of tardigrades might even be able to survive desiccation for up to a century. Yet the average lifespan of a (continuously hydrated) tardigrade is rarely longer than a few months. "It sounds quite strange," says Miller, "that even though these tardigrades only live for a few weeks or months, that lifetime can be stretched over many, many years."

How does being dried out protect them from the vacuum of space?
In its desiccated state, the tardigrade is ridiculously, almost absurdly resilient. Laboratory tests have shown that tardigrades can endure both an utter vacuum and intense pressures more than five times as punishing as those in the deepest ocean. Even temperatures up to 300 degrees Fahrenheit and as low as -458 degrees F (just above absolute zero) won't spell the creature's doom. But the exact source of its resilience is a mystery, says Emma Perry, a leading tardigrade researcher at Unity College in Maine. "In general, we know very little about how this species functions, especially when we're talking about the molecular level." There are clues, though. Scientists have learned that when the tardigrade enters its desiccated state, "it replaces some of its cell contents with a sugar molecule called trehalose," Perry says. Researchers believe this trehalose molecule not only replaces water but can also, in some cases, physically constrain the critter's remaining water molecules, keeping them from rapidly expanding when faced with hot and cold temperatures. This is important, because expanding water molecules (like what happens when you get frostbite) can mean instant cellular death for most animals.

What about space radiation?
Space is deadly, and not just because of the vacuum. Outside our protective atmosphere there is killer radiation caused by distant supernovae, our sun, and other sources. Space radiation comes in the form of harmful charged particles that can embed themselves in the bodies of animals, ripping apart molecules and damaging DNA faster than it can be repaired. But here too, the tardigrade seems oddly prepared for life in space. According to Peter Guida, the head of NASA's space radiation laboratory, one of the biggest radiation concerns for astronauts (and space-bound tardigrades) is a set of molecules called reactive oxygen species. Ionizing radiation enters the body and bores into wayward molecules that contain oxygen. In simple terms, those newly irradiated molecules then troll through the body causing all sorts of harm. During their desiccated state, tardigrades produce an abnormal amount of antioxidants (yes, these actually exist outside the health-food world), which effectively neutralize those roaming, evil reactive oxygen species. Partly because of this talent, tardigrades have been found to withstand higher radiation doses with far greater success than researchers would otherwise believe they should.

The reason tardigrades would have evolved to survive high radiation doses is a mystery, too. However, Miller points to a leading theory: perhaps tardigrades evolved to be swept up by the wind and survive in the earth's atmosphere, which would explain not only their hardiness but also why they're found the world over. However it happened, there's still much, much more to learn about these fascinating creatures.
http://www.popularmechanics.com/science/space/secrets-of-the-water-bear-the-only-animal-that-can-survive-in-space-17069978

Mars or bust: the new space race to put humans on the red planet

An artist's impression of a Mars One settlement. The colonists would spend the rest of their lives on the red planet. Image: Bryan Versteeg/mars-one.com

The race is on to land people on Mars. But it's not just Nasa taking part. Guest blogger Zahaan Bharmal reviews the competitors in this amazing dash for the red planet.

Two years ago tomorrow, a nuclear-powered rover the size of an SUV and weighing almost a tonne was lowered onto the surface of Mars. Touching down ever so gently, Nasa's Curiosity landed with an almighty roar. It sent a message to the world that a new space race, a race to eventually set foot on Mars, was well under way. There are still many years, many missions, many things to try, fail, and try again before Nasa completes this race. And the costs are likely to be in the hundreds of billions of dollars. But the Curiosity mission represented a crucial stepping-stone towards Nasa's eventual goal of a manned mission to Mars in 2035.

It is, however, not alone in its ambition. The last time Nasa raced to space, its rival was the former Soviet Union, a giant superpower competing for political and military superiority in the cold battlefield of space. This time the competitors are a lot smaller but no less tenacious. Over the past few years a diverse and eclectic group of nonprofit and for-profit organisations have emerged, all with their sights set on sending humans to Mars. And crucially, they all think they can get there much faster and much cheaper than Nasa. Here is a brief look at three of the teams racing to Mars.

Inspiration Mars – "The flyby"
Who are they? A non-profit founded by Dennis Tito, the first ever space tourist.
What's their big idea? A flyby. They plan to orbit Mars but never actually land.
Is there any point going to Mars if you're not going to land? There's a lot to learn from the journey. They want to know more about how humans deal with long periods in space, both mentally and physically.
Sounds fun. When do we leave? Sadly, no Brits allowed. The crew is going to be all American, a man and a woman, preferably married.
OK, when does the happy couple leave? In 2018. That's when the Earth and Mars will align, offering a rare orbit that will allow a craft to make the round trip in only 501 days, which will save on fuel. And in 2018 there will also be the lowest levels of solar radiation for over a decade. So less need for the SPF 30,000 sun cream.
That's only four years from now. What if they are not ready? There's a plan B. In 2021 there will be another rare orbit, this time involving a slingshot around Venus on the way to Mars, taking 589 days.
A flyby sounds cheaper than landing. How much will this all cost? Between $1bn and $2bn is the estimate, funded largely by Tito.

Mars One – "The one-way trip"
Who are they? A non-profit founded by Dutch entrepreneur Bas Lansdorp.
What's their big idea? A one-way trip to Mars.
Wait, no return trip home? Yup. Once you're there, you're there. Their plan is to eventually establish a permanent human settlement.
Sounds like a commitment. How many people would be up for that? Quite a few, it seems. Mars One recently invited members of the public to apply to be their first astronauts and received 2,782 applications, over 100 of which were from the UK.
And how many will actually get to go? So far, just over 700 have been invited for an interview. From there, a selection process will whittle down the list to six groups of four astronauts.
When do they leave? The first crew of four will leave in 2024. After that, crews of four will leave every two years.
How much is it all going to cost? An estimated $6bn, plus the cost of supporting the astronauts for as long as they live on Mars.
Where are they getting the money from? They plan to document the entire journey on a reality TV show, which they hope will largely finance the expedition.

Space X – "The round trip"
Who are they? A private space exploration company founded by Elon Musk.
He was the guy that inspired Tony Stark's character in Iron Man, right? Yup. Genius, billionaire, playboy, philanthropist.
What's his big idea? A fleet of reusable rockets, launch vehicles and space capsules to transport humans to Mars and back again.
Why does he want to go to Mars? Musk's ultimate vision is for humanity to become a multi-planetary civilisation. He wants to build a self-sustaining city on Mars of up to 80,000 people, so that in the event of a catastrophe on Earth we have somewhere else to go.
When do they leave? Their aim is to put a human boot on Mars by 2026.
Who would get to go? Unclear at this stage, but Musk may well be on one of the first rockets himself.
How much will this all cost? The total costs are unclear. But Musk thinks he may be able to charge the average person as little as half a million dollars for a round trip to Mars.
You mean, half a BILLION dollars? Nope, it's not a typo. Half a million dollars.

Time will tell how these and other organisations will fare. But, in my view, one fact is beyond question: the ambition to send a human to Mars is the right one. The moon landing was arguably the most important achievement in the history of humanity. Putting human footprints on Mars would be an even greater accomplishment. More than any other celestial ambition, Mars promises the answers to some of our most profound questions, none more so than whether life exists elsewhere in the universe. It was a race that got us to the moon 45 years ago. And it is this current race, this amazing race, that will help us one day get to Mars.

Zahaan Bharmal works for Google and recently won Nasa's Exceptional Public Achievement Medal for YouTube Space Lab. This article was written in a personal capacity.

Rubber duck-shaped Comet 67P/Churyumov-Gerasimenko does not produce enough gravity to hold Rosetta in orbit.
Photograph: Esa

This morning, a thruster burn brought Rosetta into "orbit" around its target comet, signalling the start of its main science phase. The spacecraft will now track comet 67P/Churyumov-Gerasimenko for a year, following it through its closest approach to the sun to monitor how the extra heating affects the icy surface. Nothing about this mission is ordinary, and these are no ordinary orbits. The weirdly shaped comet, which some have likened to a rubber duck, does not produce enough gravity to fully hold the spacecraft. Instead, the flight team will "drive" the spacecraft in triangular-shaped orbits, gradually lowering the altitude from today's 100km to around 30km. One of the most audacious space missions in decades, it is designed to reveal secrets about the origin of the solar system, the origin of our home planet and even the origin of life.

From a PR point of view, today's triumphant rendezvous stands in stark contrast to the last time the European Space Agency mounted a comet mission. That was back in 1986. The mission was called Giotto and the comet was the famous one named after Edmond Halley, the 18th-century astronomer who predicted its return. Back then, the venerable TV astronomer Patrick Moore and James Burke conducted a live programme to bring the first pictures of the comet to the British public as they were received. Moore was in Darmstadt, Germany, where Esa's mission control is located, and Burke was in London. Viewers were promised the first ever view of the nucleus of a comet – the mountainous iceberg that evaporates to give the ghostly tails for which comets are famous.

The snag was that when the pictures arrived they were incomprehensible. They were garishly colour-coded, and not even the experts could interpret them. No one knew at the time the beauty of the comet that would be revealed by subsequent analysis. Then, just around the time of closest approach, the signal from the spacecraft disappeared. The guests were left with no recourse but to speculate about whether Giotto could have been destroyed by the dust and debris coming off the nucleus. It makes uncomfortable, embarrassing watching even today. But, if the stories are true, the damage was much worse than a few red faces. It is now a piece of space folklore that UK prime minister Margaret Thatcher was watching. She was so appalled by what she perceived to be a total waste of money that she began a severe tightening of the UK's purse strings where Esa and space research were concerned.

That has now turned around completely, with the UK a committed investor in the space sector. The other thing that has turned around is the way science is communicated to the public. This morning there were no red faces; just fantastic images, fantastic data and the promise of more fantastic science to come. Read the official statement from Esa here.

Stuart Clark is the author of Is There Life on Mars? (Quercus)
http://www.theguardian.com/science/across-the-universe/2014/aug/05/mars-space-race-humans-red-planet
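The "not enough gravity" point above is easy to quantify with Newton's formula for circular orbital speed, v = sqrt(GM/r). The sketch below uses a published rough estimate of 67P's mass (~1e13 kg); treat that figure as approximate:

```python
# Why 67P can barely "hold" Rosetta: at comet masses, orbital speeds are
# centimetres per second, so tiny perturbations (outgassing, solar
# radiation pressure) can push a spacecraft off a natural orbit.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_67P = 1.0e13     # approximate mass of comet 67P, kg (rough estimate)
r = 30_000         # orbital radius in metres (the planned ~30 km altitude)

v_circular = math.sqrt(G * M_67P / r)   # circular orbital speed
v_escape = math.sqrt(2) * v_circular    # escape speed at the same radius

print(f"circular: {v_circular:.2f} m/s, escape: {v_escape:.2f} m/s")
```

The result is on the order of 0.1-0.2 m/s, a slow walking pace for a snail, which is why the flight team flies powered triangular arcs rather than relying on gravity alone.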

Is Time Travel Possible?

A scientific paper recently published by a pair of physicists and Doctor Who fans suggests that time travel via a TARDIS-like timeship may be scientifically possible. According to the paper, the geometry of spacetime in which the Doctor's fictional TARDIS travels may actually exist in our own universe. The authors, Ben Tippett and David Tsang, say that this geometry allows for the possibility of travel in all directions of space and time. Despite the seriousness of the topic, From Quarks To Quasars points out that the pair decided to have a bit of fun with their Doctor Who-related paper. Claiming to work at the Gallifrey Polytechnic Institute and the Gallifrey Institute of Technology, the two are actually employed as theoretical physicists at Earth-bound institutions. Tsang earned his degree at Cornell and currently works at McGill University, while Tippett teaches at the University of British Columbia. Together, they named their time-travel paper "Traversable Achronal Retrograde Domains In Spacetime," which, despite the intentional nod, refers to their concept of spacetime curves.

Tsang and Tippett posit that in order for TARDIS-like time travel to be achieved, the fabric of spacetime, which although massively complex is essentially constructed out of three dimensions of space and one of time, must include closed timelike curves. As The Inquisitr has previously reported, closed timelike curves are a highly controversial feature of modern physics, one that potentially allows for time travel in concert with the laws of relativity. The TARDIS proposed in Tippett and Tsang's paper would create one of these timelike curves, which would provide a conduit of sorts to move through space and time, much like the time vortex depicted in Doctor Who.

While time travel may be theoretically possible, there are unfortunately several drawbacks to the researchers' TARDIS concept. In order to function, it requires the use of exotic matter, which has yet to be shown to exist in our universe. Time travel would also have to violate classical mechanics, and in order to move in anything other than a circular direction in both time and space, more than one TARDIS curve would need to be constructed, raising the possibility of exiting the time vortex into a universe of anti-matter. For those who are interested in the concept of time travel but don't have the academic background to deal with the daunting mathematics, Tsang and Tippett have authored The Blue Box White Paper, a document aimed at explaining their TARDIS concept to scientific laypeople.
[Image via Bitbillions]
http://www.inquisitr.com/1399699/whovian-physicists-claim-tardis-time-travel-is-possible/

A Journey to an Amazing Earth-like Planet!

As of July 2014, astronomers have discovered 1,739 confirmed exoplanets and 452 multiplanet solar systems. There are also an additional 4,234 unconfirmed planets detected by the NASA Kepler observatory. Sorted by size, the confirmed planets include 107 that are Earth-sized. About 47 of the confirmed planets are located in the habitable zones of their stars -- the distance from a star where liquid water might pool on the surface.

Let's take a look at one of the most intriguing ones, which is both Earth-sized and in the habitable zone of its star. It's called Kepler-186f, which means it's the fifth planet (b, c, d, and e are the others) orbiting the star Kepler-186.

Kepler-186f (Credit: NASA Ames/SETI Institute/JPL-Caltech)

Known only by its catalog number, it has no name yet, but we know quite a bit about it already. It orbits a dim red star about 490 light-years from our Sun, in the constellation Cygnus. It has a year that is 130 Earth days long, at a distance similar to Mercury's from our own sun. Yet even at this proximity, the planet's surface only gets about one third the sunlight that Earth does. If you were to set up a rooftop solar panel system to create electricity, the panels would have to occupy three times the area of a similar system on Earth just to get the same amount of power.

The planet is actually located near the outer edge of its star's habitable zone, under conditions much like those of our own planet Mars. Enough sunlight heats its surface that liquid water could exist. Kepler-186f is about 11 percent larger in radius than Earth and has a mass about 1.44 times that of Earth. This means that if you weigh 150 pounds on Earth, you would weigh about 175 pounds there as you watched the dim red sun Kepler-186 in the sky. Kepler-186f is far enough from its star that gravity probably would not have locked the planet's rotation so that it would always have the same side facing its star.
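As an aside, the 175-pound figure quoted above follows from simple surface-gravity scaling: g is proportional to M/R², so plugging in the article's mass and radius ratios gives a quick check:

```python
# Surface gravity scales as mass / radius^2 (Newtonian gravity at the
# surface). Using the article's figures for Kepler-186f:
mass_ratio = 1.44     # planet mass in Earth masses
radius_ratio = 1.11   # planet radius in Earth radii (11 percent larger)

g_ratio = mass_ratio / radius_ratio ** 2   # surface gravity in Earth g
weight_lb = 150 * g_ratio                  # weight of a 150 lb person

print(f"g = {g_ratio:.2f} x Earth; a 150 lb person weighs ~{weight_lb:.0f} lb")
```

The result, about 1.17 g, reproduces the article's "about 175 pounds."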
But the planet would probably still rotate much more slowly and have a longer day than Earth -- perhaps weeks long. If it has an atmosphere as dense as Earth's with a hint of carbon dioxide, it could be a comfortable world, with surface temperatures above 32 degrees Fahrenheit. But with a thicker atmosphere like Venus, it could also be a deadly 500-degree-Fahrenheit hellhole! We just don't know yet.This rocky planet probably has a molten outer core like our Earth, so it can generate a protective magnetic field, but because the planet rotates so slowly, the field is much weaker than Earth's. This means that any atmosphere it may have could eventually leak away as the solar wind from its star slowly strips away the atmosphere over time. Some volcanism could help replenish it, so it's an interesting race between two opposing processes to sustain an atmosphere over the eons.Because its star is a red dwarf, the star emits very little ultraviolet light, so unlike on Earth, there may not exist a thick ozone layer. More importantly, there is no source of energy to drive organic chemistry leading to life on its surface. Sufficient energy might exist in underwater vents like those found deep in Earth's oceans, but this greatly reduces the volume of water where chemistry could lead over time to the first replicating molecules. It could be an ocean world frozen forever at the cusp of creating the first living systems, but without enough energy from sunlight on the planetary scale to push the right sequence of reactions over the threshold where larger replicating molecules like RNA and DNA could form.This artist's impression shows sunset from yet another Earth-like planet called Gliese 667 Cc. (Credit: Wikipedia/Calcada)Meanwhile, the star also hosts four additional planets (186b, 186c ,186d and 186e), though they are all closer to the star than Kepler-186f and are too hot to have liquid water. 
These four planets have lost the gravitational sweepstakes and are tidally locked, which means the same side of each planet always faces the star, in perpetual daylight. Surface temperatures are well above 200 degrees F all the time, under a blistering red sun that always stays in the same spot in the sky. With atmospheres, the temperatures could be even higher!

The New Frontier

There are still thousands of other candidate planets that we know about, so the hunt for more twins to our own Earth continues. The biggest new challenge is to detect and measure the atmospheres of these planets to search for biosignature compounds like ozone and oxygen. Once those are discovered, we will know with certainty that some kind of life exists there, but because the distances are so great, we will never be able to see what kinds of life have emerged: bacteria or dinosaurs? That remains the biggest frustration in the search for life beyond our solar system.

Because these worlds are so far from Earth, we will never be able to send a space probe to visit any of them. The travel time by conventional chemical or ion rockets would be millions of years, so we will have to study them using telescopes. Knowing that these habitable planets exist and may even bear life, but never knowing its precise form, will remain a source of great frustration for future humanity.

The kind of human civilization needed to attempt star travel, whether robotically or with humans, does not exist, nor are there any historical templates for what such a civilization would need to look like to sustain this kind of centuries-long dedication. Not even the single-minded toiling of the ancient Egyptians with their pyramids, or the Druids with their Stonehenge, serves as a guide.
Perhaps only the recognition of a planetary, extinction-level event will galvanize humanity to become the kind of self-sacrificing civilization that can embark upon these kinds of journeys, despite the fact that not all humans can be saved by the effort.

But for the time being, we can still gaze through our telescopes at these intriguing orbs, keep careful records of the habitable and life-supporting worlds we find, and wonder at the amazing possibilities they imply.

http://www.huffingtonpost.com/dr-sten-odenwald/a-journey-to-an-amazing-earth-like-planet_b_5655548.html
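The 150-versus-175-pound comparison above follows from the fact that surface gravity scales as g = GM/R^2. A quick sketch using the article's figures for Kepler-186f (1.44 Earth masses, 11 percent larger radius):

```python
# Surface gravity scales as g = G*M / R^2, so relative to Earth the
# ratio is simply (M / M_earth) / (R / R_earth)^2.
mass_ratio = 1.44    # Kepler-186f mass in Earth masses (from the article)
radius_ratio = 1.11  # Kepler-186f radius in Earth radii (from the article)

gravity_ratio = mass_ratio / radius_ratio**2   # ~1.17x Earth's surface gravity

weight_on_earth = 150  # pounds
weight_on_kepler = weight_on_earth * gravity_ratio
print(round(weight_on_kepler))  # -> 175, matching the article's figure
```

The same ratio works for any Earth-relative mass and radius, which is how such weight comparisons for exoplanets are usually derived.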

Picture 1

NASA Selects Proposals for Advanced Energy Storage Systems

The Scarab lunar rover is one of the next generation of autonomous robotic rovers that will be used to explore dark polar craters at the lunar south pole. The rover is powered by a 100-watt fuel cell developed under the Space Power Systems Project of the Game Changing Development program. Supported by NASA, the rover is being developed by the Robotics Institute of Carnegie Mellon University. Image Credit: Carnegie Mellon University

NASA has selected four proposals for advanced energy storage technologies that may be used to power the agency's future space missions. Development of these new energy storage devices will help enable NASA's future robotic and human-exploration missions, and aligns with conclusions presented in the National Research Council's "NASA Space Technology Roadmaps and Priorities," which calls for improved energy generation and storage "with reliable power systems that can survive the wide range of environments unique to NASA missions." NASA believes these awards will lead to such energy breakthroughs.

"NASA's advanced space technology development doesn't stop with hardware and instruments for spacecraft," said Michael Gazarik, associate administrator for Space Technology at NASA Headquarters in Washington. "New energy storage technology will be critical to our future exploration of deep space -- whether missions to an asteroid, Mars or beyond. That's why we're investing in this critical mission technology area."

A model of a 3-kilowatt fuel cell that could be used for deep space power applications.
Image Credit:

Managed by the Game Changing Development Program within NASA's Space Technology Mission Directorate, the four selected technology proposals are:

-- Silicon Anode Based Cells for High Specific Energy Systems, submitted by Amprius, Inc., in Sunnyvale, California
-- High Energy Density and Long-Life Li-S Batteries for Aerospace Applications, submitted by the California Institute of Technology in Pasadena
-- Advanced High Energy Rechargeable Lithium-Sulfur Batteries, submitted by Indiana University in Bloomington
-- Garnet Electrolyte Based Safe, Lithium-Sulfur Energy Storage, submitted by the University of Maryland, College Park

Phase I awards are approximately $250,000 and provide funding to conduct an eight-month component test and analysis phase. Phase II is an engineering development unit hardware phase that provides as much as $1 million per award for one year, while Phase III consists of prototype hardware development, with as much as $2 million per award for 18 months.

Proposals for this solicitation were submitted by NASA centers, federally funded research and development centers, universities and industry. NASA's Langley Research Center in Hampton, Virginia, manages the Game Changing Development program for the Space Technology Mission Directorate.

NASA is working closely with the Department of Energy's Advanced Research Projects Agency-Energy (ARPA-E) and other partners to propel the development of energy storage technology solutions for future human and robotic exploration missions. Committed to developing the critical technologies needed for deep space exploration, NASA's Space Technology Mission Directorate will make significant investments over the next 18 months to address several high-priority challenges in achieving this goal.

http://www.nasa.gov/spacetech
http://www.nasa.gov/press/2014/august/nasa-selects-proposals-for-advanced-energy-storage-systems

Picture 1

The Moon Powering the Earth

The lunar dirt brought back by mankind's first moonwalkers contained an abundance of titanium, platinum and other valuable minerals. But our satellite also contains a substance that could be of even greater use to civilisation -- one that could revolutionize energy production. It's called helium-3, and it has been deposited on the moon in vast quantities by the solar wind. Now China is looking to mine the moon for the rare helium isotope that some scientists claim could meet global energy demand far into the future. Professor Ouyang Ziyuan, the chief scientist of the Chinese Lunar Exploration Program, recently said the moon is "so rich" in helium-3 that it could "solve humanity's energy demand for around 10,000 years at least." Helium-3, scientists argue, could power clean fusion plants. It is nonradioactive, and a little goes a very long way. The gas has an estimated potential economic value of $3 billion a ton, also making it economically viable to consider mining it from the moon. Read the full article here: http://www.dailymail.co.uk/sciencetech/article-2716417/Could-moon-fuel-Earth-10-000-years-China-says-mining-helium-satellite-help-solve-worlds-energy-crisis.html

Picture 1

Eternal Youth Could be Found in a Worm

The nematode worm C. elegans can put itself into 'famine mode,' researchers have discovered -- a state in which it does not age. They say a new study of the phenomenon could one day lead to a drug for humans. "It is possible that low-nutrient diets set off the same pathways in us to put our cells in a quiescent state," said David R. Sherwood, an associate professor of biology at Duke University who led the research. "The trick is to find a way to pharmacologically manipulate this process so that we can get the anti-aging benefits without the pain of diet restriction." Researchers found that taking food away from C. elegans triggers a state of arrested development: while the organism continues to wriggle about, foraging for food, its cells and organs are suspended in an ageless, quiescent state. When food becomes plentiful again, the worm develops as planned, but it can live twice as long as normal. Read the full article here: http://www.dailymail.co.uk/sciencetech/article-2664199/The-key-eternal-youth-Researchers-reveal-worm-hold-arrested-development-double-lifespan-process.html?ITO=1490&ns_mchannel=rss&ns_campaign=1490

Picture 1

Using an Asteroid to get to Mars

Somewhere above the clouds, way up into the deep space of the inner solar system, there’s an asteroid tumbling near Earth with NASA’s name on it. Within the next decade or so, the space agency wants to snag the space rock and haul it to the moon. And they’ve hatched two fantastical plans to do it. One would snare an asteroid with a gigantic inflatable bag; the other might send a sticky-fingered robot out to grab a golf cart–sized boulder off an even bigger rock. Both would help humans prepare for an eventual trip to Mars. At least that’s what NASA says. But not everyone’s convinced that the plan, called the Asteroid Redirect Mission, or ARM, brings people any closer to the Red Planet. Since NASA announced the mission last year, the so-called stepping stone to Mars has sparked a bristling debate. Since the plan’s debut, the space agency’s leadership has begun to explain more clearly how they see the ARM’s connection to Mars. And NASA’s not just spinning the story; the agency has been brainstorming ways to use the ARM as a launch pad for deep-space exploration. Digging into an asteroid could also be a boon. A water-rich asteroid, for example, could save NASA the trouble of toting water up from Earth. Mining asteroids could even offer astronauts shielding materials for radiation, a big health problem for humans in space. Read the full article here: https://www.sciencenews.org/article/nasa-bets-asteroid-mission-best-path-mars?mode=topic&context=36 Check out ARM on the NASA website: http://www.nasa.gov/mission_pages/asteroids/initiative/index.html#.U-Wd0khEAzw

Picture 1

Brain-Like Computer Chip

"I'm holding in my hand a chip with one million neurons, 256 million synapses, and 4096 cores. With 5.4 billion transistors, it's the largest chip IBM has built." The above quote is from Dr. Dharmendra S. Modha, who is raving about his long-term project: an IBM effort to create an entirely new type of computer chip, SyNAPSE, whose architecture is inspired by the human brain. This new chip is a major success in that project. "Inspired" is the key word, though. The chip's architecture is based on the structure of our brains, but greatly simplified. Still, within that architecture lie some amazing advantages over today's computers. For one thing, despite this being IBM's largest chip, it draws only a tiny amount of electricity -- about 63 mW -- a fraction of the power drawn by the chip in your laptop. Each core of the chip is modeled on a simplified version of the brain's neural architecture. The core contains 256 "neurons" (processors), 256 "axons" (memory) and 64,000 "synapses" (communications between neurons and axons). This structure is a radical departure from the von Neumann architecture that's the basis of virtually every computer today (including the one you're reading this on). Read the full article here: http://www.forbes.com/sites/alexknapp/2014/08/07/ibm-builds-a-scalable-computer-chip-inspired-by-the-brain/?partner=yahootix
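The per-core figures quoted above scale up consistently to the chip-level totals in the opening quote; a quick arithmetic check (note that "64,000 synapses" is a rounding of 256 x 256 = 65,536 per core):

```python
cores = 4096
neurons_per_core = 256   # the "neurons" (processors)
axons_per_core = 256     # the "axons" (memory)
synapses_per_core = neurons_per_core * axons_per_core  # 65,536 (~64,000 as quoted)

total_neurons = cores * neurons_per_core    # 1,048,576 -- the "one million neurons"
total_synapses = cores * synapses_per_core  # 268,435,456 -- the "256 million synapses",
                                            # counting 256 million as 256 * 2**20
print(total_neurons, total_synapses)
```

So each headline number is just the per-core figure multiplied across the 4096 cores.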

Picture 1

"Mega-Earth" Discovered

Astronomers have recently announced that they have discovered a new type of planet - a rocky world weighing 17 times as much as Earth. Theorists believed such a world couldn't form because anything so hefty would grab hydrogen gas as it grew and become a Jupiter-like gas giant. This planet, though, is all solids and much bigger than previously discovered "super-Earths," making it a "mega-Earth." The newfound mega-Earth, Kepler-10c, circles a sunlike star once every 45 days. It is located about 560 light-years from Earth in the constellation Draco. The system also hosts a 3-Earth-mass "lava world," Kepler-10b, in a remarkably fast, 20-hour orbit. The discovery that Kepler-10c is a mega-Earth also has profound implications for the history of the universe and the possibility of life. The Kepler-10 system is about 11 billion years old, which means it formed less than 3 billion years after the Big Bang. Read the full article here: http://www.cfa.harvard.edu/news/2014-14

Picture 1

How to Collect Your Very Own Meteorites

Meteors rain down on the earth every hour of every day. Most are hardly larger than a grain of rice or a pea, and the majority are little more than particles of dust, 10 to 40 micrometers (0.0004-0.0016 inch) in size. The average one is scarcely a quarter of the width of a human hair. The atmosphere makes short work of the larger ones. The remainder of these small meteors -- called "micrometeorites" -- are perpetually sifting down to the surface. Ten thousand tons of them every day. They fall everywhere, so that means you have the opportunity to collect them in your own backyard! In fact, the French astronomer-artist Lucien Rudaux was one of the first to do this and made something of a hobby of it. Here is how you can do it, too. All you need is: a cookie sheet, plastic wrap, a magnet, a sheet of paper, and a magnifying glass or microscope. Line the cookie sheet with the plastic wrap. Fold the edges of the wrap under the sheet so it won't blow away. Place it outdoors in a spot where nothing blocks the sky and the sheet is protected from the wind. Let the sheet remain outdoors for at least a week. When you bring it back inside, the plastic will be covered with all sorts of debris. If it has rained, it will be full of water, too. Straining the water through a sieve will help get rid of any large debris, such as leaves and bugs. Carefully run the magnet through what remains. A piece of paper wrapped over the end of the magnet will make it easier to remove whatever sticks to it. You will find some small particles sticking to the magnet. These are the remnants of meteoroids that disintegrated in the upper atmosphere. They stick to the magnet because most meteoroids contain iron and nickel. Look at the particles through the magnifying glass or microscope. Read the full article here: http://io9.com/5984951/how-to-collect-meteorites-in-your-backyard
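To get a feel for what a week of collecting might yield, here is a back-of-envelope sketch that spreads the article's "ten thousand tons per day" evenly over Earth's surface. The uniform-fall assumption and the ~0.1 square meter cookie-sheet area are mine, not the article's:

```python
import math

# Spread 10,000 metric tons/day evenly over Earth's surface
# (assumptions: uniform fall, a ~0.1 m^2 cookie sheet).
flux_kg_per_day = 1e7                       # 10,000 metric tons in kg
earth_area_m2 = 4 * math.pi * 6.371e6**2    # ~5.1e14 m^2

flux_ug_m2_day = flux_kg_per_day / earth_area_m2 * 1e9  # ~20 micrograms per m^2 per day
sheet_week_ug = flux_ug_m2_day * 0.1 * 7                # ~14 micrograms per sheet per week
print(f"{flux_ug_m2_day:.0f} ug/m^2/day, ~{sheet_week_ug:.0f} ug per sheet per week")
```

A dozen-odd micrograms is only a sprinkling of dust grains, which is exactly why the magnet and the microscope are the essential tools here.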

Picture 1

Using the Moon for Storage

Everyone needs a place to keep their stuff. Currently, the storage business is booming, but the very real possibility of running out of space is looming just around the corner. So what if we found a giant, unused wasteland where we could keep all of our extra junk? Well, some people think the Moon might be the answer. Talk about storage on the moon has taken off recently thanks to the Torah on the Moon project. The project wants to send a Sefer Torah—a Hebrew Bible scroll—to the moon, where it would be stored in an airtight case. Other artifacts could follow, project organizers say. For now, the storage of artifacts, or anything else, on the moon may be nothing more than a flight of fancy. However, experts aren’t necessarily dismissing the notion. Evolutionary biologist Richard Dawkins told New Scientist that the moon could become a “cosmic tombstone" if humans become extinct. “We should be using it to store the best humanity has ever had to offer, like the works of Michelangelo, Beethoven, Schubert and Shakespeare," Dawkins said. Read the full article here: http://www.selfstorage.com/tips/storage-on-the-moon-infographic/ Check out the Torah on the Moon project: http://torahonthemoon.com/

Picture 1

Mysterious Deep Space Radio Blast

A split-second burst of energy that erupted in deep space is giving astronomers important new clues about a mysterious class of astrophysical phenomena. Only a handful of these rapid, millisecond-duration events, known as "fast radio bursts" (FRBs), had been detected previously, all of them by a single instrument — the Parkes Observatory in Australia. As a result, some astronomers have speculated that FRBs have local origins. But the latest burst, which was observed on Nov. 2, 2012 by the Arecibo radio telescope in Puerto Rico, puts the lie to that notion. The source of fast radio bursts remains a mystery that astrophysicists are eager to solve. A number of exotic possibilities include evaporating primordial black holes, merging or collapsing neutron stars and superconducting cosmic strings. Flares from magnetically active neutron stars, known as magnetars, could also be responsible for the events. Read the full article here: http://www.space.com/26741-deep-space-radio-blasts-mystery.html

Picture 1

Iron Age Bones Undergo Bizarre Ritual

The bones of dozens of Iron Age warriors found in Denmark were collected and ritually mutilated after spending months on the battlefield, archaeologists say. At least six months after the soldiers died, their bones were collected, scraped of remaining flesh, sorted and dumped in a lake. Some were handled in a truly bizarre manner; for instance, four pelvises were found strung on a stick. The bones appear to belong to soldiers who suffered a tragic defeat in battle. The remains are riddled with markings that suggest cuts from spears and axes, but what is really strange is that they were left out in the open for such a long time before the ritual began: the bones have been gnawed on by animals, most likely wolves and dogs. After this time, someone collected the corpses and sorted at least some of the bones by type. Marks of cutting and scraping suggest the bones were separated deliberately and that any remaining flesh was removed. Animal sacrifices and ceramic pots mixed in with the remains suggest some sort of religious ritual, said excavation leader Mads Kähler Holst. Along with the pelvises strung like beads on a stick, there is evidence that leg bones and thighbones were sorted, too. There are examples of ritual treatment of defeated enemies in what is now France, Switzerland and England in the centuries prior to this find, but nothing like it has ever been seen in Denmark or the surrounding areas. Read the full article here: http://www.huffingtonpost.com/2014/08/03/iron-age-bones-discovered_n_5641667.html?utm_hp_ref=science

Picture 1

Zombie Star

According to NASA, astronomers using the Hubble Space Telescope have found a star system 110 million light years away that might have created a “zombie star" during a Type Iax supernova - the smaller, dimmer, less common variety of star explosion. A burned-out white dwarf star was sucking energy from its healthy blue companion star, feeding off of it until -- boom! Supernova. Often, this explosion of the dwarf star is a cataclysm, reducing it to smithereens. But not always. There's a less common and less destructive type of supernova whose discovery was announced only last year. In this cosmic event, the dwarf star survives, albeit "battered and bruised," a shadow of its former self -- or, as NASA puts it, a zombie star. Read full article here: http://www.latimes.com/science/sciencenow/la-sci-sn-nasa-hubble-zombie-star-20140806-story.html Here's what NASA's saying: http://www.nasa.gov/press/2014/august/nasa-s-hubble-finds-supernova-star-system-linked-to-potential-zombie-star/

Picture 1

Over 90% of DNA Could Be Worthless

For decades, scientists have known that the vast majority of the genome is made up of DNA that doesn't seem to contain genes or turn genes on or off. The thinking went that most of this vast terrain of dark DNA consisted of genetic parasites that copy segments of DNA and paste themselves repeatedly in the genome, or that it consists of the fossils of once useful genes that have now been switched off. Researchers coined the term junk DNA to refer to these areas. But in recent years, researchers have debated whether "junk" might be a misnomer and whether this mysterious DNA might play some role. A massive project called ENCODE, which aimed to uncover the role of the 3.3 billion base pairs, or letters of DNA, in the human genome that don't code for proteins, found that in test tubes, about 80 percent of the genome seemed to have some biological activity, such as affecting whether genes turn on. Whether that translated to any useful or necessary function for humans, however, wasn't resolved. To put the idea to the test, researchers examined the DNA of a carnivorous plant known as the bladderwort. The bladderwort had about 28,500 genes, not much different from plants of similar type and complexity. The difference was in the junk: the bladderwort seemed to have stripped out a vast amount of noncoding DNA, yet the plant did just fine without that material. This led scientists to believe the same could be true of human DNA. Read the full article here: http://www.foxnews.com/science/2013/05/13/junk-dna-mystery-solved-it-not-needed/ Check out the ENCODE website: http://www.genome.gov/10005107

Picture 1

Rosetta to Rubber Ducky Comet

After a 10-year, 4-billion-mile journey through deep space, a European probe will finally arrive at its comet destination this week. The European Space Agency's Rosetta spacecraft is scheduled to rendezvous with Comet 67P/Churyumov-Gerasimenko on Wednesday (Aug. 6). If all goes according to plan, Rosetta will on that day become the first probe ever to orbit a comet -- and, in November, the first to drop a lander onto the surface of one of these icy wanderers. After the first images of the comet had been sent back to Earth, people immediately compared its shape to that of a rubber ducky. And can you blame them? The 1.3-billion-euro ($1.75 billion at current exchange rates) Rosetta mission blasted off in March 2004, kicking off a long and circuitous journey through the solar system. The probe has swung around the sun five times and zoomed past Earth on three separate speed-boosting flybys, European Space Agency (ESA) officials said. The rendezvous operation actually consists of 10 different maneuvers, which began in early May and will conclude with a final engine burn on Wednesday. These moves will end up slowing Rosetta's speed relative to 67P from 1,790 mph (2,880 km/h) at the end of the probe's hibernation to 2 mph (3 km/h) -- walking speed -- at the time of orbital insertion. Read the full article here: http://www.huffingtonpost.com/2014/08/04/rosetta-comet-rendezvous_n_5647709.html?utm_hp_ref=science Check out the ESA blog here: http://blogs.esa.int/rosetta/

Picture 1

The Impossible Spacecraft

In a quiet announcement that has sent shockwaves through the scientific world, Nasa has cautiously given its seal of approval to a new type of "impossible" engine that could revolutionize space travel. In a paper published by the agency's experimental Eagleworks Laboratories, Nasa engineers confirmed that they had produced tiny amounts of thrust from an engine without propellant -- an apparent violation of the conservation of momentum, the law of physics that states that every action must have an equal and opposite reaction. Nasa's engineers have tested an engine known as a 'Cannae Drive', a machine that instead uses electricity to generate microwaves, bouncing them around inside a specially designed container that theoretically creates a difference in radiation pressure and so results in directional thrust. All this may no longer be merely theoretical, however. Nasa's scientists tested a version of the drive designed by US scientist Guido Fetta and found that the propellant-less engine was able to produce between 30 and 50 micronewtons of thrust -- a tiny amount (roughly 0.003 to 0.005 per cent of the force of an iPhone pressing down when held in the hand) but still a great deal more than nothing. Read the full article here: http://www.independent.co.uk/news/science/nasa-approves-impossible-space-engine-design-that-apparently-violates-the-laws-of-physics-and-could-revolutionise-space-travel-9646865.html
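To put 30-50 micronewtons in perspective, here is a rough comparison against the weight of an iPhone held in the hand. The ~112 g mass (an iPhone 5s, contemporary with the story) is my assumption; the article does not name a model:

```python
# How tiny is 30-50 micronewtons of thrust? Compare it with the weight
# of an iPhone resting in your hand (assumed mass: ~112 g, iPhone 5s).
g = 9.81                      # m/s^2
iphone_weight_n = 0.112 * g   # ~1.1 N

for thrust_un in (30, 50):
    fraction = (thrust_un * 1e-6) / iphone_weight_n
    print(f"{thrust_un} uN is {fraction:.1e} of the iPhone's weight")
```

The fractions come out on the order of a few hundred-thousandths of the phone's weight, which is why any claimed detection at this level demands extremely careful error analysis.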

Picture 1

Your Pet's Journey to the Stars

Celestis Inc. is a company that gives you the opportunity to send the cremated remains of your dead loved ones into space. Recently, they've added the option to send the remains of your pet on a celestial journey as well. Because pets are a part of the family too, right? "Because your pet loved to explore" is the company's tagline, which continues: "Honor your best friend with a journey to the stars on board the world's first pet memorial spaceflight service." You can send a small portion of your pet's ashes or a lock of hair into space with several different packages. Your pet's remains can be sent on a suborbital spaceflight, an orbit around the earth, or on a trajectory that takes them to the moon or beyond. Read the full article here: http://news.discovery.com/space/private-spaceflight/one-small-step-for-your-dead-pet-140801.htm?utm_medium=referral&utm_source=pulsenews#mkcpgn=rssnws1 Check out Celestis Pets: http://www.celestispets.com/#!historic-animals-/c116y

Picture 1

You Can Name a Planet

The International Astronomical Union (IAU), which oversees planetary nomenclature, recently invited the public to submit and vote on names for recently discovered exoplanets and their parent stars. The invitation comes almost a year after the IAU reversed its official stance on whether to bother naming exoplanets in the first place (Kepler is finding them at a pretty rapid clip these days) and whether the public should be included in the naming process. You can head over to the NameExoWorlds website and register to submit your own name for a planet. All the names submitted will be reviewed and voted upon. Then, in the summer of 2015, a special event will be held in Honolulu to announce the winners. Read the full article here: http://io9.com/you-are-now-officially-invited-to-nominate-names-for-ex-1602448946 Check out the NameExoWorlds website: http://nameexoworlds.org/

Picture 1

8,000 Year Old Skull: World's Oldest Brain?

Archaeologists in Norway have found an 8,000-year-old skull at a Stone Age site that could very well be of human origin. Remarkably, it contains a grey, clay-like substance thought to be the preserved remains of the brain. If confirmed, it could be one of the oldest human brains ever found. The skull was uncovered at the Stokke site in Vestfold, Norway. It's not known whether the skull belongs to an animal or a child. Initial tests date the skull to around 5,900 BC, making it almost 8,000 years old. Experts are being recruited to help the archaeologists confirm its exact origin. In addition to the skull, archaeologists have found numerous artifacts and a pit of carbon-rich soil containing bones. Read more here: http://io9.com/an-8-000-year-old-skull-has-been-found-with-preserved-b-1604650921

Picture 1

Exoplanet with Longest Known Year

The techniques for finding exoplanets -- or planets orbiting distant stars -- favor the discovery of large planets orbiting close to their stars. That's why astronomers at the Harvard-Smithsonian Center for Astrophysics (CfA) were pleased to announce that they've found an exoplanet -- which they call Kepler-421b -- whose "year" lasts 704 days. That is the longest orbit of any exoplanet discovered via transit to date. A planet in our solar system, Mars, has a similarly sized orbit of 780 days. Meanwhile, most of the 1,800-plus exoplanets discovered to date orbit much closer to their stars than Kepler-421b and have much shorter "years." And that's not the only interesting thing about Kepler-421b. You've no doubt heard of a star's habitable zone, the zone around a star within which a planet might orbit and still have liquid water on its surface. Kepler-421b is on the far edge of the habitable zone in its solar system, near what astronomers call the frost line. In our solar system, it's this frost line that is said to divide the orbits of the four rocky inner planets (Mercury, Venus, Earth & Mars) from the four gas giant planets (Jupiter, Saturn, Uranus & Neptune). Finding a planet near its solar system's frost line could help shed light on the process by which solar systems form. Read the full article here: http://earthsky.org/space/transiting-exoplanet-with-the-longest-known-year

Picture 1

Radiation Protection Using Magnets

The Earth is protected from cosmic radiation thanks to a magnetic field surrounding the planet. NASA scientists are attempting to use this concept to create a new form of radiation protection using superconducting magnets. It is possible that these magnets would be able to generate magnetic fields around space probes and habitats to protect them from space radiation and cosmic rays. "The concept of shielding astronauts with magnetic fields has been studied for over 40 years, and it remains an intractable engineering problem," says Shayne Westover of Johnson Space Center (JSC). "Superconducting magnet technology has made great strides in the past decade." One of the major problems, however, is the enormous amount of power it takes to generate a magnetic field. Read the article here: http://www.spacesafetymagazine.com/superconducting-magnets-protect-spacecrafts-space-radiation/

Picture 1

Reusable Rockets

SpaceX is currently attempting to build and test a rocket that will be reusable upon re-entering the Earth's atmosphere. SpaceX's Grasshopper is a 10-story Vertical Takeoff Vertical Landing (VTVL) vehicle consisting of a Falcon 9 first stage, a single Merlin 1D engine, four steel landing legs with hydraulic dampers, and a steel support structure. It is currently undergoing test flights at the SpaceX Rocket Development Facility in McGregor, Texas. While most rockets are designed to burn up on reentry, SpaceX rockets are designed not only to withstand reentry, but also to return to the launch pad for a vertical landing. The Grasshopper VTVL vehicle represents a critical step towards this goal. See more here: http://www.spacex.com/news/2013/03/31/reusability-key-making-human-life-multi-planetary

Picture 1

Earth-like Soil on Mars: Possible Life?

Soil deep in a crater dating to some 3.7 billion years ago contains evidence that Mars was once much warmer and wetter, says University of Oregon geologist Gregory Retallack, based on images and data captured by the rover Curiosity. NASA rovers have shown Martian landscapes littered with loose rocks from impacts or layered by catastrophic floods, rather than the smooth contours of soils that soften landscapes on Earth. However, recent Curiosity images from the Gale impact crater, Retallack said, reveal Earth-like soil profiles with cracked surfaces lined with sulfate, ellipsoidal hollows and concentrations of sulfate comparable with soils in the Antarctic Dry Valleys and Chile's Atacama Desert. The ancient soils, he said, do not prove that Mars once contained life, but they do add to growing evidence that an early wetter and warmer Mars was more habitable than the planet has been in the past 3 billion years. Read the full article here: http://www.sciencedaily.com/releases/2014/07/140717125043.htm

Picture 1

New Lunar Crater: Possible Habitat?

What you're looking at here is not an impact crater. It's a large hole on the Moon's surface that formed when the ground above a lava tube collapsed. NASA believes these pits widen underground and contain tunnels — which would be very handy for the first wave of lunar colonists. “Pits would be useful in a support role for human activity on the lunar surface," said Robert Wagner of Arizona State University, Tempe, Arizona. “A habitat placed in a pit — ideally several dozen meters back under an overhang — would provide a very safe location for astronauts: no radiation, no micrometeorites, possibly very little dust, and no wild day-night temperature swings." Read the full article here: http://io9.com/these-intriguing-lunar-caves-could-provide-shelter-for-1607153182

Picture 1

Storing Today's Data Inside a Mountain

In the heart of the Swiss Alps, a mountain storage facility now holds today's media of data storage -- everything from floppy discs, to DVDs, to USBs. The reason it's all being stored there is for the day it becomes obsolete. The researchers of a project called Planets, funded by the European Union, know that in a few years much of the data we keep in certain types of storage will be rendered useless, and all of that data will go to waste. That's why Planets created their project. The Planets team deposited a capsule deep into the heart of the Swiss Fort Knox compound, containing punch-cards, microfilm, floppy discs, audio tapes, CDs, DVDs, USB and Blu-ray media. They wanted to give the researchers of the future everything they might need to reconstruct our media and salvage our histories, regardless of how different their technological landscape looks. All of this sits inside the Swiss Fort Knox, which means it's pretty well guarded -- it could even outlast a nuclear attack. Read the full article here: http://gizmodo.com/5542072/the-blueprint-to-all-our-data-is-hidden-inside-this-mountain-fortress Check out the Planets website: http://www.planets-project.eu/ Check out the Swiss Fort Knox site: http://www.swissfortknox.com/index.html

Picture 1

4,000-Year-Old Human Brain

Ancient, well-preserved human remains aren't exactly a common find, which makes the discovery of a 4,000-year-old brain in western Turkey pretty noteworthy. The brain was found in 2010 in an ancient burial ground that appeared to have been burned: charred skeletons and wooden objects were found, but somehow brain tissue inside the skulls of four skeletons had been preserved. Scientists theorize that an earthquake buried the bodies, so the fire wasn't able to burn them. It was, however, able to boil their brains. Yes, the liquids inside each of the brains began to boil, drying out the tissue and lowering the amount of oxygen present, which was most likely the biggest factor in the preservation of the brain tissue. Read the rest of the article here: http://www.theverge.com/2013/10/3/4798898/human-brain-preserved-for-4000-years-after-being-boiled

Picture 1

Bizarre Death Rituals

When it comes to death, cultures across the generations have all seemed to do it differently, and some death rituals are just downright bizarre. For example, did you know that in certain tribes of Papua New Guinea and Brazil the deceased would be eaten by family and friends? The practice, known as endocannibalism, was actually meant to be the highest form of respect. Thankfully, it seems to have fallen out of practice. Some cultures required not only the burial of the dead person but for someone to go with them; Viking burials and ancient Indian burials were very similar in this way. Vikings would require that a female servant of the deceased be burned in a ship with them, though not before being raped and strangled to death. The ancient Indian burial custom known as Sati held that the widow of the deceased should be burned alive so the two could enter the afterlife together. Now, both of these rituals claimed that the sacrifices went willingly, but I bet some of them would have had a different say in the matter. Check out even more bizarre burial rituals here: http://io9.com/5960343/10-bizarre-death-rituals-from-around-the-world

Recommended Reading

Abundance Hub: Data proving that we're living in an extraordinary time, and that it's only getting better.
io9.com: This blog is full of really cool articles, not only about science but about generally interesting topics.
TheSpaceport.us: This blog is about everything astronomical.
The Planetary Society Blog: The blog of the Planetary Society, run by Bill Nye and friends, posts articles about recent news in the space community.
NASASpaceflight.com: This blog is all about recent and upcoming spaceflights from around the world.
GoogleLunarXPrize.com: This blog by Astrobotic Technology posts updates on their advancements toward creating lunar colonies.
Bad Astronomy: Phil Plait runs this blog, which is dedicated to debunking astronomy myths and generally bad information.

Picture 1

Turning Your Brain to Plastic

Many people and research companies have been trying to find ways to preserve the human brain in order to bring people back after death. So far it seems this can most likely be achieved in two ways. The first is cryonics: a great number of people have already placed themselves in cryonic suspension, but there are several foreseeable problems with bringing a person back this way. This brings me to the second method of preservation: plastination (or chemopreservation). Plastination involves chemically fixing brain tissue and embedding it in plastic for room-temperature storage. The Brain Preservation Foundation (BPF) has done extensive research on this method and has even succeeded at using plastination to preserve small parts of animal brains; they plan on attempting to preserve whole animal brains next. Their research involves discovering which parts of the brain contain consciousness and memories. Once they know this, they can work out which parts of the brain actually are "you" and preserve those to be reassembled when the technology is available. Check out this article explaining how plastination works: http://io9.com/5943304/how-to-preserve-your-brain-by-turning-it-into-plastic Here's the BPF website: http://www.brainpreservation.org/content/overview

Picture 1

Data that will Outlast the Human Race

Jeroen de Vries, a researcher at the University of Twente in the Netherlands, has developed a new optical memory device out of tungsten and silicon nitride that he says could store data safely for extremely long periods of time, up to a billion years. The device stores data as QR codes etched into the tungsten, which makes it very difficult to damage. De Vries heated the storage device to a temperature of 200 °C (392 °F) for one hour and noted no visible degradation, which according to his aging model simulates one million years of use. The device only showed signs of degradation once it was heated to much higher temperatures, around 440 °C (824 °F), and even then the tungsten was not harmed and the data was still readable. De Vries is so confident in his design that he claims that after the human race is extinguished, whatever life forms are left will still be able to access human knowledge and culture thanks to this invention. Read the full article here: http://www.gizmag.com/billion-year-data-storage/29530/
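The "one hour at 200 °C simulates a million years" figure comes from accelerated-aging math: an Arrhenius-type model in which degradation reactions speed up exponentially with temperature, so a short hot bake stands in for eons of room-temperature storage. Here's a rough sketch of that reasoning in Python; the 1.55 eV activation energy is an assumed illustrative value, not the figure from De Vries's actual model.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_test_c, t_use_c, activation_ev):
    """Arrhenius ratio of degradation rates at the test temperature
    versus the normal storage temperature (both given in Celsius)."""
    t_test = t_test_c + 273.15  # convert to kelvin
    t_use = t_use_c + 273.15
    return math.exp((activation_ev / K_B) * (1.0 / t_use - 1.0 / t_test))

# One hour baking at 200 C vs. storage at 20 C, assuming a 1.55 eV
# activation energy for the dominant degradation process.
factor = acceleration_factor(200.0, 20.0, 1.55)
years_per_hour = factor / 8766.0  # ~8766 hours in a year
print(f"1 hour at 200 C is roughly {years_per_hour:,.0f} years at 20 C")
```

With these assumptions the one-hour bake corresponds to on the order of a million years at room temperature, which is the style of reasoning behind the headline number; the actual paper's activation energy and model details will differ.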

Picture 1

Garments for the Grave

"Garments for the Grave" is a fashion label created by Australian designer Pia Interlandi. The point of the clothing line is for the garments to decompose along with the person they're buried with. Interlandi ran several experiments with dead pigs, testing the decomposition rates of the various fibers the pigs were dressed in. She has created a shroud-like prototype, and other designs are available or in the works. Read the full article here: http://news.discovery.com/tech/gear-and-gadgets/fashion-for-the-final-journey-130923.htm Check out Pia Interlandi's site here: http://www.piainterlandi.com/garments-for-the-grave/

Picture 1

Plastic Protection Against Space Radiation

One of the biggest problems for space travelers today is cosmic radiation. These harmful rays, produced by our own Sun and by high-energy sources outside the Solar System, are extremely dangerous to humans, causing cancer, cellular damage, and even death. Many materials have been proposed to help protect astronauts from these rays, but one of the easiest and most practical seems to be plastic. Aluminum is usually used for radiation protection despite the extra weight it adds to the craft, but a new form of plastic has been developed that seems to do an even better job of protecting the spacecraft while being less cumbersome as well. The team behind CRaTER, the Cosmic Ray Telescope for the Effects of Radiation, has worked with NASA to develop Tissue Equivalent Plastic (TEP). This plastic reacts to radiation in a similar way to human muscle tissue, which makes it an excellent stand-in for how much radiation the body would absorb. Read the full article here: http://www.universetoday.com/102885/plastic-protection-against-cosmic-rays/ Check out the CRaTER website for TEP: http://crater.sr.unh.edu/instrument_tissue.shtml

Picture 1

Darker than Black: Vantablack

Recently, a British company called Surrey NanoSystems claimed to have invented the world's darkest material. And trust me, the picture above does not do it justice. In fact, Surrey NanoSystems' chief technical officer, Ben Jensen, has said that looking at the material is like looking into a black hole. While the material doesn't absorb all light, it absorbs over 99 percent of it, a scientific record. The nanotube material, named Vantablack, has been grown on sheets of aluminum foil by the Newhaven-based company. The sheets may be crumpled into miniature hills and valleys, but on areas coated with Vantablack that landscape simply disappears. Vantablack's practical uses include improving telescopes and cameras by absorbing stray light. It also holds some sort of value to the military, although what it will be used for is classified for now. Check out the full article here: http://www.independent.co.uk/news/science/blackest-is-the-new-black-scientists-have-developed-a-material-so-dark-that-you-cant-see-it-9602504.html

Picture 1

KickSat: World's Smallest Satellite

Building on the CubeSat concept, Cornell University student Zac Manchester has created a new type of satellite hardly bigger than a postage stamp that you can buy for a mere few hundred dollars. Over the last several years he and a few collaborators have designed, built, and tested a very tiny and inexpensive spacecraft called Sprite that can be launched into low Earth orbit with the help of KickSat. KickSat is a CubeSat, a standardized small satellite that can easily be launched, designed to carry hundreds or even thousands of Sprites into space and deploy them in low Earth orbit. The Sprites will be housed inside KickSat in several spring-loaded stacks and held in place by a lid. A radio signal transmitted from the ground station will command the lid to open, releasing the Sprites as free-flying spacecraft. Check out Zac's Kickstarter: https://www.kickstarter.com/projects/zacinaction/kicksat-your-personal-spacecraft-in-space And here's an article with more info: http://www.businessinsider.com/a-picture-of-the-smallest-earth-orbiting-satellite-ever-2014-5

Picture 1

DNA Memorial

When a loved one passes away, people tend to want something to remember them by. At DNA Memorial, they believe you should have a literal piece of them in your possession. The company gives people the opportunity to take the DNA of a deceased loved one and turn it into a unique keepsake, the last chance to collect that DNA as information for generations to come. Working through funeral homes, DNA Memorial collects your loved one's DNA, and their unique process of binding the DNA to a silicate opens up many options: banking at home or at their facility, custom jewelry, and portraits, all while keeping the DNA safe for future testing. Your loved one's DNA is secure for generations with DNA Memorial. Check out their site here: http://dnamemorial.com/

Picture 1

Sunlight Powered Spacecrafts

One of the main problems with deep space exploration today is power. In order for a spacecraft to leave or even get near the edge of the Solar System, it needs a tremendous amount of fuel. However, NASA and other research organizations are currently looking into the possibility of powering spacecraft using nothing more than sunlight, a technology referred to as solar sails. The basic concept is that photons from sunlight reflect off the sails and transfer just enough momentum to push the craft forward. Solar sails have been in development for some time now, and several missions are planned to launch in the near future. In fact, the Planetary Society, led by Bill Nye, is planning to launch a "LightSail" mission in 2016 built around CubeSats, in which they plan to test the ability to steer a spacecraft propelled by solar sails. Read more info about solar sails here: http://science.howstuffworks.com/solar-sail.htm Read more about the upcoming mission: http://www.planetary.org/explore/projects/lightsail-solar-sailing/
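The photon-momentum idea lends itself to a quick back-of-the-envelope check. The Python sketch below computes the thrust sunlight exerts on an ideal reflective sail near Earth; the 32 square meter area and 5 kilogram mass are illustrative assumptions, not LightSail's actual specs.

```python
# Radiation pressure on a reflective solar sail at Earth's distance from the Sun.
SOLAR_CONSTANT = 1361.0  # W/m^2, sunlight intensity near Earth
C = 299_792_458.0        # m/s, speed of light

def sail_thrust(area_m2, reflectivity=1.0):
    """Force in newtons from sunlight hitting a flat sail head-on.
    Reflected photons transfer up to twice their momentum: F = (1 + r) * I * A / c."""
    return (1.0 + reflectivity) * SOLAR_CONSTANT * area_m2 / C

area = 32.0  # m^2, assumed sail area
mass = 5.0   # kg, assumed spacecraft mass
force = sail_thrust(area)
print(f"Thrust: {force * 1e6:.0f} micronewtons")
print(f"Acceleration: {force / mass * 1e6:.1f} micro-m/s^2")
```

The push is tiny, a few hundred micronewtons on a sail this size, but unlike a rocket it never runs out of fuel, so the velocity keeps building for as long as the sail faces the Sun.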

Picture 1

Written History: Preserving Your Legacy

Have you ever wanted to sit down, recall the best memories of your life or your family's life, and write them all down? You might run into several problems when you try. You may not be the most talented writer, and therefore unable to capture exactly what a moment looked or felt like. You also might not be able to think of anything particularly interesting or remarkable that seems worthy of being written down. Thanks to Legacy Preservation, though, you can overcome these obstacles. Legacy Preservation is a company of authors and historians who will sit down with you and your family and help you write a book about your life. They interview you, take extensive notes, write the book or essay, and edit the entire piece for you. And of course you have the final say over what goes in. You can even get the work published. I don't know about you, but I think this is pretty cool! See more for yourself on their site: http://legacypreservation.com/

Picture 1

One-way Trip to Mars

I personally never thought I would see someone start a colony on Mars during my lifetime. However, it seems like that is exactly what is going to happen. An organization called Mars One is currently putting together a project to build and sustain a colony on Mars. You didn't misread that. This is actually happening. The picture above is from a press conference held in 2013 where they announced open applications for anyone who wanted to join the mission. Of course they couldn't accept all applicants; 24 were to be selected from the large group. These everyday people will begin their training in 2015, and it will last until around 2024. The entire mission is estimated to be completed in 2026, after they have set up a base on Mars and confirmed that it can sustain life. And of course, this mission is one-way. No coming back. Check out their site: http://www.mars-one.com/mission/roadmap

Picture 1

Frozen in Time: Cryonics

The idea of being frozen for thousands of years, only to wake up in the distant future, has been explored in many works of fiction, such as Star Trek and Futurama. But it is also something actually being pursued. The practice of freezing oneself right after clinical death is called cryonics. The main hope in cryonics is to preserve the brain, which still has several minutes of activity after death. The theory is that if the brain is preserved, someday someone will discover a way to reanimate the body. If you think something like this is impossible, you might be surprised to learn that over 100 people have already been frozen through cryonics. Sure, the process is long and arduous, but many people believe the benefits and scientific possibilities far outweigh the difficulty of the task. Cryonics is by no means cheap, however; it can cost up to $100,000 to freeze and store someone. Here's an article explaining cryonics: http://science.howstuffworks.com/life/genetic/cryonics1.htm Here are a couple of companies that provide cryonics services: http://www.alcor.org/; http://www.cryonics.org/

Picture 1

Crystallizing Data: Quartz Storage

One of the main problems with preserving data is that storage mediums tend to go out of date fairly quickly. How long do you think it will be before we can no longer read VCR tapes or floppy disks? All of the information stored on them will be lost forever. This is why many people are trying to create something that can store data for millennia and can be read no matter the era. The electronics giant Hitachi, partnered with Kyoto University's Kiyotaka Miura, claims to have made such a device: "semiperpetual" slivers of quartz glass that Hitachi says can preserve information for hundreds of millions of years with virtually no degradation. Read the full article here: http://www.scientificamerican.com/article/data-saved-quartz-glass-might-last-300-million-years/

Picture 1

DIY Space Missions: CubeSats

As technology becomes more and more advanced, space exploration is getting easier, and things considered science fiction only a couple of years ago could now be quite feasible. That being said, the general public seems to want in on the action. Luckily for them, a personal satellite is now available for people who wish to send their very own spacecraft on a space mission. CubeSat is a program working with NASA that will send collections of the devices into orbit to collect data, and all data collected by your satellite is yours to do with as you please. A CubeSat is a small satellite shaped like a 10-centimeter cube and weighing just 1 kilogram; that's about 4 inches and 2 pounds. The design has been simplified so almost anyone can build one, and the instructions are available for free online. CubeSats can be combined into larger satellites in case you need bigger payloads, and deployable solar panels and antennas make them even more versatile. The cost to build one? Typically less than $50,000. They might be small, but you can do a lot with them, including taking pictures from space, sending radio communications, performing atmospheric research, running biology experiments, and serving as a test platform for future technology. Check out the official CubeSat site: http://www.cubesat.org/ And here's an article with more information: http://www.diyspaceexploration.com/what-are-cubesats/

Picture 1

The CataCombo Sound System

It has been said that music is a medium whose influence is so strong it transcends several planes of existence. At CataCombo Sound Systems they take that very seriously. The CataCombo Sound System lets someone place an extremely hi-tech sound system into their coffin so that they can listen to their favorite music even in the grave. The entire system consists of three parts: the CataPlay, CataTomb, and CataCoffin. You can manage the coffin's playlist online with the help of Spotify, and the system uses the coffin's unique acoustics to provide a rich sound. Fredrik Hjelmquist, the company's founder and CEO of Pause, has already committed to being buried in one of these coffins. In fact, you can go to the company's website right now and choose a song to be uploaded to his playlist, called "Pause-4ever." Whether or not you think it was a smart idea to give this kind of power to the internet is up to you. Visit the website and check it out for yourself: http://www.catacombosoundsystem.com/#addspotify:track:36DyCBroDt1FjKM7zRaEOw

Picture 1

Micro Engraved Devotional Jewelry

HD-Rosetta™ archival preservation technology uses unique microscopic processes to store analog and/or digital data, such as text, line illustrations, or photos, on nickel plates. When applied to jewelry, sacred or classic texts can be mastered in HD-Rosetta™ format and replicated in 24-carat gold on a polymer support. MORE INFORMATION: http://www.norsam.com/rosetta.html

Picture 1

Introducing DNA Capsule

YOUR DNA IS YOU, NOW AND FOREVER
When you store your DNA, you lock a piece of yourself away. Your DNA gives future generations all there is to know about you and could help them make important life choices. If you are interested in preserving your DNA, head over to www.dnacapsule.com and make it happen.

Picture 1

Do you long for the Colonial era?

The East India Company is now accepting applications from prospective colonists! Well, I'm kidding about the East India Company, but it's easy to imagine that a new colonial era is likely in the future. "One of the most Earth-like planets in the galaxy has been discovered 'a stone's throw away'": Gliese 832c is a super-Earth located in the "Goldilocks zone" of a solar system 16 light years away. Astronomers have discovered an alien planet that could offer some of the most Earth-like conditions seen to date in the galaxy. FULL ARTICLE HERE: http://www.independent.co.uk/news/science/one-of-the-most-earthlike-planets-in-the-galaxy-has-been-discovered-a-stones-throw-away-9572491.html