Fire Forces Evacuation Of Famous Camp And Village As Wildfire Season Sets In For A Long Stay

A 30,000-acre wildfire is now burning through the Philmont Scout Ranch, shown here, outside Cimarron, New Mexico. Both Philmont and the village have been evacuated. The fire comes after one of the driest winters in memory for northern New Mexico. (AP Photo/Mike Dreyfuss)

Months ago, here in the nearby Sangre de Cristo Mountains, we saw the first sign of the looming catastrophe now bearing down on a beloved small town nestled where the plains meet the Rockies.

As of Sunday, Cimarron, New Mexico, is a ghost town with mandatory evacuations in place thanks to the 30,000-acre wildfire sending plumes of choking smoke into skies that have been clear and blue for much of this spring. Ash falls on the deserted streets rather than the much-needed rain that might have prevented the wildfire, which sparked to life early Thursday in the forest to the west of town.

But really, we knew it would have taken biblical April and May showers to prevent this from happening. Instead, we’ve had weeks of winds. Nerve-wracking, moisture-sucking gusts whipping down the mountains and across the already crispy plains.

The signs that conditions were ripe for an epic fire began to mount in January. The first avalanche safety training of the season, scheduled to take place near the roof of New Mexico at Taos Ski Valley, was cancelled due to a severe lack of snow. As in almost none. There was little threat of even lackadaisical snowball fights this winter, let alone avalanches.

Another month went by and few flurries flew, leading to the cancellation of the second training session.

In a typical February, USDA SNOTEL snowpack reporting stations in the Sangres measure multiple feet of snow at various spots between 10,000 and 13,000 feet of elevation. This year, most of those stations were returning error messages by mid-February due to a dearth of anything to measure. Rather than piles of snow, only wisps of dry grass and parched pine trees surrounded the automated stations.

Now, as the summer season starts, the state of New Mexico consists of millions of acres of that same dry grass and wood. One of the driest winters in living memory has transformed the Land of Enchantment into a tinderbox. For weeks now we’ve simply been waiting for the spark we knew was inevitable.

Two weeks ago, rafting a popular section of the Rio Grande Gorge near Taos required a few hours to cover just a few miles due to the river’s extremely low flow. A bathtub-ring-like stain along the huge volcanic boulders lining the riverbed revealed the dispiriting disparity with last spring, when nearly ten times as much water was flowing through the canyon.

Further downriver, the Rio Grande is already running dry south of Albuquerque. That is nowhere near normal this early in the year.

So yeah, we knew this was coming. It was just a question of exactly when and where.

But really, the signs this was coming have been written on the wall for much longer. Not with words, necessarily, but with a clearer, more succinct symbol: a hockey stick. Not a real hockey stick, and not really a symbol either, but a representation of a very real reality. This is the hockey stick I’m talking about.

The famed “hockey stick” diagram showing the dramatic rise in average global temperatures in recent years. (IPCC/Penn State)

The hockey stick tells us the world is getting warmer. It tells us the Southwest is getting drier. It has made climate cycles more erratic and extreme. So this has been a long time coming. Or rather, it has been underway for a while now, and things are about to intensify again.

Maybe it’s not again. After all, the hockey stick blade has been growing ever longer in recent years; only the aim of its slapshot changes.

Now, we suppose, it is our turn to be the target. The signs have been visible here for months.

Today the Ute Park Fire is bearing down on Cimarron, home to heaps of Old West history, one of the world’s most famous haunted hotels and the Philmont Scout Ranch, where millions of memories have been made. Our heads are filled involuntarily with visions of tragedy burned into multiple California landscapes over the past 12 months.

Our distaste for population density like that seen on the coasts may spare us the epic amounts of damage seen in places like Santa Rosa last year, but the loss won’t be any less devastating.

Wildfire is no longer just a season here; it is a new way of life in the high desert. We are living not on the razor’s edge, but on the ever-lengthening blade of a hockey stick.

Amazon’s Alexa Has A Clear Favorite – and Some Savage Analysis – for the NBA Finals

Amazon’s Alexa voice assistant is a handy, seamless way to listen to music and find out about the weather. As the NBA finals head into tonight’s Game 2 between the Golden State Warriors and the Cleveland Cavaliers, though, the voice assistant is also dabbling in sports analysis.

If you ask Alexa “Who will win the NBA Finals this year,” it gives you the following dissertation:

“Even with both conference finals going to game 7, these playoffs were over before they even started. I think the Warriors will win the playoffs pretty handily, and the rest of the league will spend the off-season trying to figure out what they will do to damper the dynasty.”

Yes, savage. You’d be forgiven for thinking that Alexa is showing some bias – the Warriors’ home base in Oakland is much closer than the Cavs’ HQ to both Amazon’s Seattle headquarters and to Silicon Valley, which you might call Alexa’s spiritual home. But Alexa’s stance is also shared by most NBA analysts (and, if the memes are any indicator, LeBron himself).


Of course, it’s deeply misleading to say that “Alexa” has any opinions at all. While the voice assistant incorporates an array of what are known as “limited” or “weak” artificial intelligence functions, such as search and natural language processing, it doesn’t have any more opinions, emotions, or sports analysis skills than your laptop (or, for that matter, your refrigerator). Those are the realm of human-like “general” A.I., which we won’t see for another 20 years, at the very least.

That becomes clear if you ask Alexa a more nuanced or specific question. Ask “Alexa, who will win Game 2 of the NBA finals?” and you get the same spiel about the series as a whole. Ask “Who will be NBA MVP this season?” and the machine draws a blank. Ask “Who will be MVP of the NBA Playoffs?” and you’ll be treated, for some reason, to a summary of Game 1.

Most likely, the scripted pro-Warriors response was plugged in manually by Amazon’s Alexa team. The Game 1 report that Alexa spits out in response to almost any other Finals-related query might have been scraped from news feeds by a more automated process, similar to the way Alexa finds and reads the news or stock reports.

Fortune has reached out to Amazon for more details about its creation’s anti-Cleveland bias. But don’t worry – Alexa won’t be replacing Jeff Van Gundy on the mic anytime soon.

Questioning Truth, Reality, and the Role of Scientific Progress

It’s an interesting time to be making a case for philosophy in science. On the one hand, some scientists working on ideas such as string theory or the multiverse—ideas that reach far beyond our current means to test them—are forced to make a philosophical defense of research that can’t rely on traditional hypothesis testing. On the other hand, some physicists, such as Richard Feynman and Stephen Hawking, were notoriously dismissive of the value of the philosophy of science.


That value is asserted with gentle but firm assurance by Michela Massimi, the recent recipient of the Wilkins-Bernal-Medawar Medal, an award given annually by the UK’s Royal Society. Massimi’s prize speech, delivered earlier this week, defended both science and the philosophy of science from accusations of irrelevance. She argues that neither enterprise should be judged in purely utilitarian terms, and asserts that they should be allies in making the case for the social and intellectual value of the open-ended exploration of the physical world.

In addition to serving as a defender of the value of science, Massimi investigates issues surrounding “realism” and “anti-realism”: how, if at all, science relates to an objective reality. Her work asks whether the process of science approaches a singular, true conception of the world, or whether it is content with simply describing physical phenomena, ignoring any sense of whether the stories it tells about the world are true. Massimi, Italian-born and currently based at the University of Edinburgh in Scotland, comes down on the side of the realists, and argues, in a position she calls “perspectival realism,” that science can make progress—a much-contested word in philosophy—despite being inevitably shaped by social and historical factors. Quanta caught up with Massimi as she prepared to deliver her prize lecture. An edited and condensed version of the interview follows.

Richard Feynman is often quoted as saying that the philosophy of science is about as much use to scientists as ornithology is to birds. How do you defend it?

Dismissive claims by famous physicists that philosophy is either a useless intellectual exercise, or not on a par with physics because of being incapable of progress, seem to start from the false assumption that philosophy has to be of use for scientists or is of no use at all.

But all that matters is that it be of some use. We would not assess the intellectual value of Roman history in terms of how useful it might be to the Romans themselves. The same for archaeology and anthropology. Why should philosophy of science be any different?

What use, then, is philosophy of science if not for scientists themselves? I see the target beneficiary as humankind, broadly speaking. We philosophers build narratives about science. We scrutinize scientific methodologies and modeling practices. We engage with the theoretical foundations of science and its conceptual nuances. And we owe this intellectual investigation to humankind. It is part of our cultural heritage and scientific history. The philosopher of science who explores Bayesian [statistical] methods in cosmology, or who scrutinizes assumptions behind simplified models in high-energy physics, is no different from the archaeologist, the historian or the anthropologist in producing knowledge that is useful for us as humankind.

Many scientists in the early 20th century were deeply engaged with philosophy, including Einstein, Bohr, Mach and Born. Have we lost that engagement?

Yes, I think what we have lost is a distinctive way of thinking about science. We have lost the idea, dating back to the Renaissance and the scientific revolution, that science is part of our broader cultural history.

In the early 20th century, the founding fathers of relativity theory and quantum mechanics were trained to read philosophy. And some of the most profound debates in physics at that time had a philosophical nature. When Einstein and Bohr debated the completeness of quantum mechanics, what was at stake was the very definition of “physical reality”: how to define what is “real” in quantum physics. Can an electron be ascribed “real” position and “real” momentum in quantum mechanics even if the formalism does not allow us to capture both? This is a profound philosophical question.

It is hard to find similar debates in contemporary physics, for many reasons. Physicists these days do not necessarily read other subjects at university or get trained in a broad range of topics at school. Large scientific collaborations enforce a more granular level of scientific expertise. More to the point, the whole ethos of scientific research — reflected in institutional practices of how scientific research is incentivized and evaluated, and how research funding is distributed — has changed. Today, science has to be of use to a well-identified group, or it is deemed to be of no use at all.

But just as with philosophy, we need fundamental research in science (and in the humanities) because it is part of our cultural heritage and scientific history. It is part of who we are.

One criticism made is that science moves on, but philosophy stays with the same old questions. Has science motivated new philosophical questions?

I think that again we should resist the temptation of assessing progress in philosophy in the same terms as progress in science. To start with, there are different views about how to assess progress in science. Is it defined by science getting closer and closer to the final true theory? Or in terms of increased problem-solving? Or of technological advance? These are themselves unsolved philosophical questions.

The received view up to the 1960s was that scientific progress was to be understood in terms of producing theories that were more and more likely to be true, in the sense of being better and better approximations to an ideal limit of scientific inquiry—for example, to some kind of theory of everything, if one exists. With the historical work of Thomas Kuhn in the 1960s, this view was in part replaced by an alternative that sees our ability to solve more and more problems and puzzles as the measure of our scientific success, regardless of whether or not there is an ideal limit of scientific inquiry to which we are all converging.


Philosophy of science has contributed to these debates about the nature of scientific success and progress, and as a result we have a more nuanced and historically sensitive view today.

But also the reverse is true: Science has offered to philosophers of science new questions to ponder. Take, for example, scientific models. The exponential proliferation of different modeling practices across the biomedical sciences, engineering, earth sciences and physics over the last century has prompted philosophers to ask new questions about the role and nature of scientific models and how they relate to theories and experimental evidence. Similarly, the ubiquitous use of Bayesian statistics in scientific areas has enticed philosophers to go back to Bayes’ theorem and to unpack its problems and prospects. And advances in neuroscience have invited philosophers to find new accounts of how the human mind works.

Thus, progress accrues via a symbiotic relation through which philosophy and the sciences mutually develop, evolve and feed into each other.

You say there has been a debate between realist and anti-realist views of science. Can you explain this?

The debate has a long history, and it is fundamentally about philosophical stances on science. What is the overarching aim of science? Does science aim to provide us with an approximately true story about nature, as realism would have it? Or does science instead aim to save the observable phenomena without necessarily having to tell us a true story, as some antirealists would contend?

The distinction is crucial in the history of astronomy. Ptolemaic astronomy was for centuries able to “save the observable phenomena” about planetary motions by assuming epicycles and deferents [elaborations of circular motions], with no pretense to give a true story about it. When Copernican astronomy was introduced, the battle that followed—between Galileo and the Roman Church, for example—was ultimately also a battle about whether Copernican astronomy was meant to give a “true story” of how the planets move as opposed to just saving the phenomena.

We can ask exactly the same questions about the objects of current scientific theories. Are colored quarks real? Or do they just save the empirical evidence we have about the strong interaction in quantum chromodynamics? Is the Higgs boson real? Dark matter?

You have argued for a new position, called perspectival realism. What is that?

I see perspectival realism as a realist position, because it claims (at least in my own version of it) that truth does matter in science. We cannot be content with just saving the observable phenomena and producing theories that account for the available evidence. Yet it acknowledges that scientists don’t have a God’s-eye view of nature: Our conceptual resources, theoretical approaches, methodologies and technological infrastructures are historically and culturally situated. Does that mean we can’t reach true knowledge about nature? Certainly not. Does it mean we should give up on the idea that there is an overarching notion of scientific progress? Absolutely not.

You have written about the role of evidence in science. This has become a hot topic because of the efforts in some parts of physics to push into realms for which there is scant evidence that might be used to test theories. Do you think true science can be done even where empiricism is not (at this point) an option?

This is an important question because, as I mentioned, the answer to the question of how to be a realist despite the perspectival nature of our knowledge depends also on how we go about collecting, analyzing and interpreting evidence for hypothetical new entities (which might or might not be real). Not only is such evidence very difficult to gather in areas like cosmology or particle physics, but also the tools we have for interpreting the evidence are very often a matter of perspective. And so how we put those tools to the service of “finding the truth” about, say, supersymmetric particles or dark energy becomes crucial.

Take, for example, the research program on supersymmetry. Here, the old philosophical ideas—that scientists start with a theoretical hypothesis, deduce empirical consequences and then run an experiment to test whether the consequences are verified or not—prove totally out of date and inadequate to capture what goes on in real scientific practice. It would be too time-consuming and inefficient for experimental physicists to test every single theoretical model produced in supersymmetry, considering also the wealth of data coming from colliders.

Instead, particle physicists have devised more efficient strategies. The goal is to rule out energy regions where no evidence has yet been found for new physics beyond the Standard Model. Our ability to survey the space of what is physically conceivable as a guide to what is objectively possible—and to fix more stringent constraints on this realm of possibilities—counts as progress, even if no particle were to be detected at the end of all those efforts.

From a philosophical point of view, what has dramatically changed is not simply old ideas about the interplay between theory and evidence, but, more importantly, our ideas of progress in science and realism. Progress here is not just about discovering a new particle. It is also—indeed, most of the time—being able to carve out the space of what might be possible in nature with high confidence. That is progress enough. Conveying this message to the public is important to rectify misconceptions about, say, whether taxpayers’ money should be spent to build more-powerful colliders if these machines don’t actually discover a new particle.

At the same time, our realist commitments should be reconsidered. I personally believe that a realist viewpoint can be defined by our ability to carve out the space of what might be objectively possible in nature, rather than by mapping onto some actual states of affairs. This is what perspectival realism is driving at.

How did you start thinking about all of this?

A turning point for me happened one day in 1996 when I was browsing through dusty old issues of Physical Review in the basement of the physics library at the University of Rome. There I bumped into the famous Einstein-Podolsky-Rosen paper of 1935 [“Can quantum-mechanical description of physical reality be considered complete,” the first paper to point to the phenomenon now called quantum entanglement]. I was struck by the “criterion of physical reality” that featured on their first page—if without in any way disturbing a system, we can predict with certainty the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity. I wondered why a physics article would start by asserting a seemingly very philosophical claim about “physical reality.” Anyway, I thought, what is a “criterion” of physical reality? And is this one justified? I remember then reading Niels Bohr’s response to that EPR paper, which chimed in my mind with more modest, knowledge-based claims about how we come to know about what there is in the world. And I decided at that point that there was a philosophical treasure trove in this area, waiting for me to explore.

Your prize address at the Royal Society is about the value of science. What do you think philosophy can bring to that discussion?

A lot! Obviously it is not the job of philosophers to do science, or to give verdicts on one theory over another, or to tell scientists how they should go about their business. I suspect that some of the bad press against philosophers originates from the perception that they try to do these things. But I believe it is our job to contribute to public discourse on the value of science and to make sure that discussions about the role of evidence, the accuracy and reliability of scientific theories, and the effectiveness of methodological approaches are properly investigated.

In this respect, I see philosophy of science as delivering on an important social function: making the general public more aware of the importance of science. I see philosophers of science as public intellectuals who speak up for science, and rectify common misconceptions or uninformed judgments that may feed into political lobbies, agendas and ultimately policy-making. Philosophy of science is an integral part of our public discourse on science, which is why I have always endeavored to communicate the value of science to society at large.

Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.

PaaS platforms are dead, thanks to IaaS providers

I can see it coming soon in the obituaries:

Sometime in 2017 or 2018, at a date that is much in dispute, private and public PaaS died of neglect. While once a part of the NIST definition of cloud computing, PaaS lived a wonderful life in the early days of cloud computing as a place to build new cloud-based applications. Standards were formed around PaaS, and even today upon its death, PaaS is surrounded by many friends and family. PaaS is survived by public IaaS platform services that offer better and more versatile development tools.

These days, much of what PaaS provides, including quick and easy development tools and quick ops deployment, has been replaced by IaaS providers. Public IaaS clouds, such as Amazon Web Services, now offer features such as container-based development, serverless computing, and machine learning and analytics that have made a feature-rich IaaS platform the best place to build and deploy cloud-based applications.

It’s also interesting to note that most major public IaaS cloud providers provide a PaaS as well.

What’s happened is a combination of choice and momentum. Developers, who these days are mostly charged with migrating application workloads to the public IaaS clouds, have to avoid PaaS. This is because PaaS clouds typically require adherence to specific programming models, languages, databases, and platforms. Thus, while PaaS is good for new cloud-based applications, you can’t easily fit some traditional LAMP-based applications into a PaaS platform. To do so means major rewriting, cost, and risk. So, bye-bye, PaaS.

The initial PaaS momentum was around the explosion of platform services that are now a large part of IaaS clouds. These services, along with platform analogs for migrated applications, are now all on the same IaaS platforms. What is more, these platforms provide state-of-the-art cloud security, as well as operational services such as management, monitoring, and business continuity and disaster recovery. In short, today’s IaaS platforms provide the features that PaaS platforms provide, plus the features that PaaS providers never provided.

Of course, technology never actually dies; it morphs into other technology, and I suspect the same will happen with PaaS. All the big IaaS providers still maintain a PaaS; indeed, some were built upon an initial PaaS offering and quickly pivoted to IaaS cloud services to keep up with the market.

However, the concept of PaaS is really what has died. The notion that a public cloud service was a platform for the building and deployment of cloud-based applications, and thus would be the preferred way of doing so, just has not survived public IaaS platforms.

Who would have thought that IaaS platforms would become more than just storage and compute services? Not the PaaS providers. But that’s exactly what has happened.

World's top wealth fund backs activist proposals at Facebook meeting

OSLO (Reuters) – Norway’s $1 trillion wealth fund backed a wide range of activist shareholder proposals at Facebook’s annual meeting on Thursday, including a measure to improve the company’s oversight of questionable content, the fund’s voting record showed on Friday.

FILE PHOTO: A 3D-printed Facebook like button is seen in front of the Facebook logo, in this illustration taken October 25, 2017. REUTERS/Dado Ruvic/Illustration/File Photo

Facebook (FB.O) has been under scrutiny from regulators and shareholders after it failed to protect the data of some 87 million users that was shared with now-defunct political data firm Cambridge Analytica.

Norges Bank Investment Management, which runs the world’s largest sovereign wealth fund, backed six shareholder proposals at the meeting in Menlo Park, California. Facebook’s management opposed all the measures.

The Norwegian fund, which owns 1.4 percent of all globally listed shares, held a 0.71 percent stake in Facebook at the end of 2017, worth $3.64 billion, according to fund data.

One of the six measures would have mandated that Facebook report on “fake news” controversies. The others involved gender pay gaps, responsible tax principles, establishing a board committee on risk management and two measures requiring Facebook to adopt simple majority voting at shareholder meetings.

The social media giant on Thursday said each of the six measures had been voted down, but it did not provide a final tally of votes. The company said that would be made available at a later time.

Reporting by Terje Solsvik, editing by Larry King

GM's Self-Driving Fleet Gets $2.25 Billion Capital Infusion From SoftBank Ahead of 2019 Launch



GM Cruise self-driving vehicles will be deployed at scale in 2019. (Photo by Karl Nielsen)

SoftBank Vision Fund will invest $2.25 billion in GM Cruise Holdings, the automaker’s self-driving car unit, to help commercialize GM’s autonomous vehicle technology in a large-scale fleet starting next year.

GM is also investing $1.1 billion in GM Cruise, strengthening its commitment to bringing self-driving cars to market in a big way.

“Our Cruise and GM teams together have made tremendous progress over the last two years,” said GM Chairman and CEO Mary Barra. “Teaming up with SoftBank adds an additional strong partner as we pursue our vision of zero crashes, zero emissions and zero congestion.”

SoftBank Vision Fund, the world’s largest technology investment fund, is a major player in the fast-developing field of autonomous vehicles and will now own 19.6 percent of GM Cruise.

“GM has made significant progress toward realizing the dream of completely automated driving to dramatically reduce fatalities, emissions and congestions,” said Michael Ronen, managing partner of SoftBank Investment Advisers. “The GM Cruise approach of a fully integrated hardware and software stack gives it a unique competitive advantage. We are very impressed by the advances made by the Cruise and GM teams, and are thrilled to help them lead a historic transformation of the automobile industry.”

GM President Dan Ammann said the automaker is excited to join forces with a tech leader that shares GM’s view of how AV technology will change the world.

The deal is subject to regulatory approval. SoftBank Vision Fund will invest $900 million when the deal closes, and the remaining $1.35 billion when Cruise AVs are ready for commercial deployment, expected in 2019.


Determining The Return On Investment On A New Software Purchase



There’s no way around it: Companies often need the most modern software. Yet software costs money, and most companies don’t have excess amounts of it lying around.

To make purchases more appealing to stakeholders and decision makers, you may need to calculate the software’s return on investment. However, this is a complicated calculation — it’s important to get it right so you can rest assured your company will be investing its money wisely.

Don’t make a poor estimate that might damage your stakeholders’ trust in you or even result in a bad decision. Follow these guidelines to learn how to calculate the return on investment for your software purchase in a straightforward, accurate way.

What Is Return On Investment?

A return on investment, or ROI, isn’t an abstract term. It’s a specific calculation of an investment’s cost versus its benefit. ROI is always calculated the same way, whether it’s for software or anything else.

The formula used to calculate ROI is as follows:

ROI = [(Gain of Investment) – (Cost of Investment)] / (Cost of Investment)

Let’s break down the two components of this calculation, one at a time, and consider how they relate to software purchases in the health care or pharmacy fields.

Gain Of Investment

Your gain of investment is the amount of money you stand to gain from implementing the new software system. In some lines of work, the gain is easy to calculate. For instance, if a retail store opens an online storefront, it will almost certainly see an increase in its sales.

In the pharmacy or health care business, you won’t see a clear increase in revenue. What you’re more likely to see is a decrease in costs.

For instance, many health care providers and pharmacies have obligations to regulators. If they don’t adhere to regulations, they may end up with a fine. Many software packages offer safeguards to make sure companies adhere to all regulatory requirements, thus reducing the likelihood of these fines. The money you don’t pay out in fines counts toward your gain of investment.

Likewise, many software packages have features that will help you do the work you already do more efficiently. The time you save, and the extra work you’re able to take on as a result, represent another part of the money you gain.

Finally, remember that for most companies, your new software will replace an old system. That old system cost you money, whether it’s through licensing fees for the software you used or the smaller costs associated with an old-fashioned pen-and-paper record system. You can factor the money you’re not spending on the system into the gain of investment.
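To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. Every dollar figure below is hypothetical, invented purely for illustration; the gain of investment combines the avoided fines, the value of time saved, and the retired system’s costs described above.

```python
def roi(gain_of_investment: float, cost_of_investment: float) -> float:
    """ROI = (gain - cost) / cost, returned as a fraction (0.33 means 33%)."""
    return (gain_of_investment - cost_of_investment) / cost_of_investment

# Hypothetical annual figures for a pharmacy software purchase (illustrative only)
avoided_fines  = 40_000   # regulatory fines the compliance safeguards help avoid
time_savings   = 25_000   # value of staff hours freed up for extra work
retired_system = 15_000   # licensing/maintenance no longer paid on the old system
gain = avoided_fines + time_savings + retired_system   # total gain: 80,000

cost = 60_000             # price of the new software

print(f"ROI: {roi(gain, cost):.0%}")   # prints "ROI: 33%"
```

An ROI above zero means the projected gains exceed the software’s cost; in this hypothetical, every dollar invested returns roughly 33 cents beyond the dollar itself.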

Study: Eating Meals Earlier In The Day Can Cut Diabetes Risk And Lower Blood Pressure


In our ongoing dieting dialogue we spend a lot of time talking about what to eat, but what if we’re leaving out something just as important? What if changing when we eat could significantly improve our health? For the first time, a study offers hard data supporting precisely that argument, showing that eating earlier in the day could affect our health as much as what we’re eating.

Animal studies have found that time-restricted diets can reduce diabetes risk by stabilizing blood sugar. To see if the same holds true for humans, a research team from the University of Alabama at Birmingham (UAB) recruited a group of overweight men, all nearly diabetic, to participate in a controlled 10-week study. Half of the group ate three meals a day within a six-hour period starting around 6:30 am and ending by 3 pm (in effect, they fasted for 18 hours a day). The other half ate three meals during a typical 12-hour day. The groups swapped eating regimens at the end of the first five weeks.

By the end of the study, it was clear that eating within a six-hour window versus a 12-hour window produced three big benefits. First, the participants’ insulin sensitivity increased, resulting in better blood sugar control (insulin is the hormone that keeps blood sugar in check; reduced sensitivity to insulin is a hallmark of prediabetes and diabetes). Their blood pressure also improved as much as if they’d been taking an average dose of blood pressure medication. And their appetite was reduced (a paradoxical outcome considering how many hours a day they weren’t eating, but predictable because their blood sugar had leveled out).

The researchers think that the results come from aligning eating times with natural circadian rhythms.

“If you eat late at night, it’s bad for your metabolism,” said lead study author Courtney Petersen, assistant professor in the UAB Department of Nutrition Sciences. “Our bodies are optimized to do certain things at certain times of the day, and eating in sync with our circadian rhythms seem to improve our health in multiple ways.”

Importantly, the benefits didn’t come from weight loss, because all of the participants ate enough calories to maintain their bodyweight. Rather, the results seemed to come directly from changing when they consumed the same amount of calories.

“Our body’s ability to keep our blood sugar under control is better in the morning than it is in the afternoon and evening,” added Petersen, “so it makes sense to eat most of our food in the morning and early afternoon.”

This was a small study of just eight participants and far from the last word on this topic, but as an initial proof-of-concept, the results are important. As diabetes continues to explode across an increasingly obese population, strategies like shifting eating times to stabilize blood sugar could make a big difference. Same for blood pressure – reducing the amount of medication patients take by changing when they eat is an approach that makes sense.

Having said that, time-restricted diets aren’t easy to follow. Compressing every meal between 6:30 am and 3 pm takes commitment and more than a little willingness to endure stomach grumbles, at least initially, before blood sugar spikes level out. We’re accustomed to eating dinner in the 5-7 pm window, often followed by a snack or two later at night. Changing that mindset takes work.

Further complicating matters is the growing popularity of fasting diets, mostly unsupported by evidence-based science but fueled, as all diet fads are, by public demand to conquer our bodies’ worst tendencies. The latest study uses a fasting method (the participants didn’t eat for 18 hours a day instead of a typical 10 or 12), but the focus wasn’t on restricting calories via fasting; it was on shifting when they’re eaten.

More research with more participants is needed, no doubt, but these preliminary findings are worth some attention. Food choices matter, but when we consume the food we choose may matter just as much.

The study was published in the journal Cell Metabolism.


The Amazing Ways Samsung Is Using Big Data, Artificial Intelligence And Robots To Drive Performance


Until recently, Korean company Samsung was said to be behind its competitors in researching and developing artificial intelligence (AI) technology, but the company’s recent strategy suggests that it’s committed to closing the gap and even competing for the top spot. With 70 percent of the world’s data produced and stored on Samsung products, the company is the leading provider of data storage products in the world. By revenue, Samsung is the largest consumer electronics company in the world—yes, it has even overtaken Apple—and it sells 500 million connected devices a year. From industry events to setting goals with AI at the forefront to updating products to use artificial intelligence, Samsung seems to have gone full throttle in preparing for the 4th industrial revolution.


Bringing innovators together

Samsung started 2018 with the intention of becoming an artificial intelligence leader, organizing the Artificial Intelligence (AI) Summit, which brought together 300 university students, technical experts and leading academics to explore ways to accelerate AI research and to develop the best commercial applications of AI.

Samsung has Dr. Larry Heck, a world-renowned AI and voice recognition leader, on its AI research team. At the summit, Dr. Heck emphasized the need for collaboration within the AI industry, both to build consumer confidence and adoption and to allow AI to flourish. Samsung announced plans to host more AI-related events, as well as the creation of a new AI Research Center that will bolster Samsung’s expertise in artificial intelligence research and development.

Bixby: Samsung’s AI Assistant

Bixby, Samsung’s artificial intelligence system designed to make device interaction easier, debuted with the Samsung Galaxy S8. The latest version, 2.0, is a “fundamental leap forward for digital assistants.” Bixby 2.0 allows the AI system to be available on all devices including TVs, refrigerators, washers, smartphones and other connected devices. It’s also open to developers so that it will be more likely to integrate with other products and services.

Bixby is contextually aware and understands natural language to help users interact with increasingly complex devices. Samsung plans to introduce a Bixby speaker to compete with Google Home and Amazon Alexa.


Climate Change Made Zombie Ants Even More Cunning

Raquel Loreto is a zombie hunter, and a good one. But traipsing through dried leaves in a hot forest in Sanda, at the southern end of Japan, she needed a guide. Just a few months before, she’d been on the internet and come across the work of artist Shigeo Ootak, whose fantastical images depict humans with curious protrusions erupting from their heads. She got in touch, and he invited her to Japan for a hike to find his inspiration.

Ootak knew precisely where to look: six feet off the ground. And there in a sparse forest, that’s where they found it: the zombie ant, an entrancing species with two long hooks coming out of its back. By now you may have heard its famous tale. A parasitic fungus, known as Ophiocordyceps, invades an ant’s body, growing through its tissues and soaking up nutrients. Then it somehow orders its host to march out of the nest and up a tree above the colony’s trails. The fungus commands the ant to bite onto the vein of a leaf, then kills the thing and grows as a stalk out of the back of its head, turning it into a showerhead raining spores onto victims down below.

That’s how it all goes down in South American forests, where Loreto had already spent plenty of time. But the zombie she found on her hike in Japan was different. First, the fungus had driven it higher up a tree. And second, it hadn’t bitten onto a leaf, but had wrapped itself around a twig, hanging upside down.

See, in the tropics, leaves stay on trees all year—but in Japan, they wither and fall. Same goes for zombie ants in the southern United States. By ordering the ant to lock onto a twig, the fungus helps ensure it can stay perched long enough to mature and rain death on more ants. In a study out today in the journal Evolution, Loreto and her colleagues show that divergence between leaf-biting and twig-biting seems to have been a consequence of ancient climate change. So who knows, modern climate change may also do interesting things to the evolution of the parasite.

Come back in time with me 47 million years to an unrecognizable Germany. It’s much hotter and wetter. As such, evergreen forests grow not only through Europe, but all the way up to the Arctic Circle. One day, a zombie ant wanders up a tree and bites onto the vein of a leaf, which conveniently enough gets fossilized. Time goes on. The climate cools, and Germany’s wet forests turn temperate.

Almost a decade ago, Penn State entomologist David Hughes looked at that fossil leaf and noticed the tell-tale bite marks of a zombie ant. “Given the fossil evidence in Germany, we know leaf biting occurred then,” says Hughes, a coauthor on the paper. “We suspect that it was also present in North America, and as those populations responded to climate change and the cooling temperature, we see a shift from biting leaves to dying on twigs.”


As vegetation changed from evergreen to deciduous, the fungus found itself in a pickle. But evolution loves a pickle. Ophio adapted independently in Japan and North America to order the ant to seek out twigs, which provided a more reliable, longer-term perch. There, the fungus grows much more slowly.

Loreto and Hughes know this thanks to the work of Kim Fleming, a citizen scientist who discovered zombie ant graveyards on her property in South Carolina. She’s been collecting meticulous data for the researchers, scouring the forest for the zombies and marking them with colored tape. “I made a map for myself so I wouldn’t get lost and leave some out,” says Fleming. (For her efforts, she’s now got a species of her very own: Ophiocordyceps kimflemingiae.)

What Fleming helped discover is that while in the tropics the fungus reaches full maturity in one or two months, in temperate climes like hers, the fungus sets up its zombie ant on a twig in June, but doesn’t reach maturity until the next year. In fact, the fungi may actually freeze over the winter. If it were attached to a leaf, it’d tumble to the ground in the fall.

“So it’s almost as if they’ve decided that nothing is going to happen this year, I’m just going to have to sit around because I don’t have time to mature and get spores out,” says Hughes. Plus, the ants hibernate in the winter anyway. Even if the fungus shot spores, there’d be no ants to infect—they’re all chilling underground in their nest.

Opting for twigs does come with a downside, though: It’s really tough to get good purchase. Until, that is, the fungus initiates a second behavior, ordering the ant to wrap its limbs around the twig, sometimes crossing the legs on the other side of the twig for extra strength. “The hyphae of the fungus growing out of the legs works as glue on the twig as well,” says Loreto. “Sometimes they would even slide down the twig, but they wouldn’t fall.”

It’s hard to imagine how a fungus with no brain could figure this all out, but that’s the power of evolution. And it goes further: In June in temperate climes, the forest is still full of both twigs and leaves, yet the fungus directs zombie ants to lock onto twigs exclusively. And in the Amazon, where it’s lush all year round, they only ever lock onto leaves. “How in the name of … whoever … does the fungus inside the body know what the difference between the leaf and the twig is?” Hughes asks. It always has both options, yet only ever “chooses” one—the best strategy for its particular surroundings.

And so a parasitic manipulation that already defied human credulity grows ever more incredible, far beyond any work of zombie fiction. Your move, Hollywood.

