Happy Towel Day

Fifteen years ago, froods and strags worldwide mourned the death of Douglas Adams, the witty author of The Hitchhiker’s Guide to the Galaxy. If you have not read that book and its hilarious sequels, I urge you to do so.

Within his books, Adams famously wrote that a towel is “about the most massively useful thing an interstellar hitchhiker can have.” In honor of his wisdom, I thought I would share a bit about what makes a towel so good at what we Earthlings generally use it for – drying.

Cloth towels are generally made from cotton, a cheap but very absorbent fiber. Cotton is so absorbent because its fibers are essentially long, hollow tubes.

Cotton fiber (C) has hollow channels that can carry water easily. Silk (A) and wool (B) fibers are less absorbent.

These tubes can easily suck up liquids in the same way that a straw can easily suck up water.

But while cotton in general is more absorbent than other types of fibers, there’s a reason a cotton towel usually feels different from a cotton T-shirt or blanket. That’s because towels are woven in a special way to take advantage of cotton’s natural absorbency.

Cotton in a T-shirt is generally woven tightly to produce a strong, yet comfortable fabric. However, this tight weaving exposes relatively little of the surface of the cotton, meaning each fiber will encounter less liquid and thus have less chance to absorb it.

In contrast, a towel’s surface has loosely woven strands of cotton sticking out of it, exposing the surface of many fibers to any liquid the fabric encounters. This allows towels to quickly absorb a lot of liquid. Think about cotton fibers sucking up water next time you get out of the shower.

And as Mr. Adams instructed: Don’t panic, and don’t forget your towel.

The Galactic Recession: A case study

In honor of the rapidly approaching release of a new Star Wars movie, I wanted to follow up on my article yesterday about humorous yet serious research with another light-hearted piece of scholarship.

It's a trap

This insightful and nerdy-reference-laden paper from a Washington University engineering professor looks at the financial impact that the destruction of two supermassive battle stations and the decapitation of the galactic government would have on the Star Wars universe. As a devoted Star Wars fan myself (if you haven’t already noticed), I find this an especially intriguing area of study because Star Wars: The Force Awakens and the books and video games released alongside it are set to describe the chaotic years after the celebrations at the end of Return of the Jedi (spoilers in that link for those concerned).

Using estimates in U.S. dollars for the costs of constructing the two Death Stars ($193 quintillion and $419 quintillion, respectively), Prof. Zachary Feinstein extrapolates that the Imperial economy had an annual “gross galactic product” (GGP) of $4.6 sextillion.

Assuming that the Galactic Empire’s banking sector held assets of around 60 percent of GGP, Feinstein modeled various scenarios for what would happen after the second Death Star was destroyed and the Empire presumably defaulted on its payments for the battle station and its predecessor.

He concluded that the Rebel Alliance would likely need to quickly provide a bailout equivalent to between 15 and 20 percent of the GGP to prevent the galactic defaults from triggering a massive recession. Since the Alliance was a relatively small insurgent group, it is highly unlikely that it would have had that amount of money available to inject into the economy it had newly inherited. This means that a prolonged economic depression could play a role in the setting of The Force Awakens. Just something else to think about in 10 days, nerdy readers.
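
For the numerically inclined, here’s a rough back-of-the-envelope sketch of that arithmetic in Python. The dollar figures are the ones quoted above; the variable names are my own, and this is simple multiplication, not a reconstruction of Feinstein’s actual financial model.

```python
# Back-of-the-envelope arithmetic using the figures quoted above (in U.S. dollars).
# This is not Feinstein's model, just the headline numbers multiplied out.

death_star_1 = 193e18   # first Death Star, $193 quintillion
death_star_2 = 419e18   # second Death Star, $419 quintillion
ggp = 4.6e21            # estimated annual gross galactic product

banking_assets = 0.60 * ggp                         # assumed banking sector, ~60% of GGP
bailout_low, bailout_high = 0.15 * ggp, 0.20 * ggp  # bailout needed to avert a recession

print(f"Cost of both battle stations: ${(death_star_1 + death_star_2) / 1e18:.0f} quintillion")
print(f"Banking sector assets:        ${banking_assets / 1e21:.2f} sextillion")
print(f"Required bailout:             ${bailout_low / 1e21:.2f} to ${bailout_high / 1e21:.2f} sextillion")
```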

Can you research anything? Bullshit.

As we enter the holiday season, it seems appropriate to note the humorous side of scientific research. Scientists are people too, and while readers of my blog have already seen a hilarious April Fools announcement from the folks at CERN, there have been many lighthearted papers in the 350 years (and 9 months) of scientific publishing, and even whole journals and prizes devoted to humorous research.

BMJ (formerly the British Medical Journal) is one of the world’s preeminent disseminators of medical science, but that hasn’t stopped its 32-year tradition of publishing an annual “Christmas issue” of fully peer-reviewed research about not-quite-serious topics. Over the years, readers have learned how speed bumps can be used to diagnose appendicitis, why patients complain that magazines in medical waiting rooms are too old and stodgy for their tastes, that the Ice Bucket Challenge spread among celebrities last year at about the same rate as swine flu did during its 2009 pandemic, and what the potential side effects of sword swallowing are.

The authors of that last article were honored with an “Ig Nobel Prize” – the premier humor award in science. Founded in 1991, these annual awards are a sly perversion of their more prestigious namesake. Prizes are given out in categories including biology, chemistry, medicine and economics, and each award is presented by an actual Nobel Prize winner.

A look through past winners turns up some quirky research, ranging from a 2013 Psychology prize for a study proving that “people who think they are drunk also think they are attractive,” to a 1995 Physics prize for research showing that cereal gets soggy when water is added to it, to a 2003 Biology prize for a very detailed eyewitness report (including pictures) of a live mallard duck having homosexual relations with a dead mallard duck. While these articles certainly sound humorous, they’re all published by actual scientists and have real-world implications. One Ig Nobel Prize winner has even gone on to win a Nobel Prize – physicist Andre Geim.

“I have always been interested in things that are funny in a way that makes you pay attention to them and keep paying attention,” creator Marc Abrahams said while reflecting on the 25th anniversary of the prizes this year. Abrahams has spent most of his career working in humorous science as editor of the Annals of Improbable Research, a journal that highlights “research that makes people laugh, then think.”

That phrase is a good description of the study featured at the top of this post. This article from several Canadian researchers in the journal Judgment and Decision Making looks at how people react to hearing “pseudo-profound bullshit,” or impressive statements that sound deep but actually mean nothing. Since I currently work as a journalist in Washington, D.C., I have nearly become numb to hearing stuff like this on a daily basis, but the researchers focused on studying the reception to vacuous statements similar to those of the “New Age guru” Deepak Chopra.

The researchers were able to design and test a “bullshit receptivity” scale to accurately gauge how profound participants may think a vacuous statement is. They found that a variety of factors could affect how one perceives bullshit – people who held strong supernatural beliefs were less likely to recognize it, for example, while those who scored higher on cognitive tests were more likely to.
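
As a loose illustration of the idea (not the researchers’ actual instrument), a receptivity score of this kind boils down to averaging how profound someone rates a set of vacuous, Chopra-style statements. The statements and the 1-to-5 scale in this Python sketch are just there for illustration:

```python
# Hypothetical sketch of a "bullshit receptivity"-style score: the mean profundity
# rating a participant assigns to vacuous statements. The statements and the 1-5
# scale below are illustrative, not the exact items or scale from the study.

def receptivity_score(ratings):
    """Average profundity rating (1 = not at all profound, 5 = very profound)."""
    return sum(ratings) / len(ratings)

vacuous_statements = [
    "Hidden meaning transforms unparalleled abstract beauty.",
    "Wholeness quiets infinite phenomena.",
]

# One imaginary participant's rating for each statement above.
ratings = [4, 5]
print(f"Receptivity score: {receptivity_score(ratings):.1f}")  # 4.5
```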

What makes this research potentially so useful is the fact that modern-day life is, in a sense, inundated with bullshit, from political rhetoric to aggressive advertising to online clickbait.

“With the rise of communication technology,” wrote the authors in concluding their article, “people are likely encountering more bullshit in their everyday lives than ever before.”

Antibiotic Resistance Trends

Hello loyal readers – I have a post in the works about everyone’s favorite controversial science topic that isn’t climate change – vaccines! But until then, here’s an interesting addendum to my last post on antibiotic resistance.

This new online tool from the Centers for Disease Control and Prevention lets you view how antibiotic resistance has increased over the past 20 years in four different kinds of foodborne bacteria. Check it out, because who doesn’t want to see something depressing on a nice, late-summer day?

‘The end of antibiotics’

In this example of the Kirby-Bauer Disk Diffusion Susceptibility Test, E. coli bacteria are streaked on a dish with white paper disks containing seven separate antibiotics. The clear areas around the disks show where bacteria have been killed by the antibiotics. The E. coli at right is resistant to most of the antibiotics tested.

Many of us are familiar with the legend – in 1928, Scottish scientist Alexander Fleming noticed that mold infiltrating his petri dishes killed the bacteria in them, thus discovering penicillin. While antimicrobial substances had been used with some success by the ancient Egyptians and Greeks, Fleming’s discovery revolutionized medical treatment by allowing doctors to kill the disease-causing microbes that Louis Pasteur had proven to exist just 50 years before.

Now, more than 100 antibiotics exist, and they are a vital and omnipresent part of modern medicine. By one estimate, antibiotics save the lives of roughly 200,000 Americans annually and add 5-10 years to life expectancy at birth.

The tragedy of the commons

The “tragedy of the commons” is one of the most famous and sobering concepts in analyzing human behavior – if enough humans behave only in their self-interest in using a resource, they could deplete or ruin that resource for their entire group. Ecologist Garrett Hardin’s famous 1968 exploration of this social dilemma focused on overpopulation and pollution, but it is a concept that applies all too well to the use of antibiotics.

“Antibiotics are a limited resource,” wrote the Centers for Disease Control and Prevention in a 2013 report on antibiotic resistance. “The more that antibiotics are used today, the less likely they will still be effective in the future.”

Antibiotics have often been deemed “magic bullets,” and their wonderfully curative properties make them seem almost unstoppable. But that unrelenting force behind all life on Earth also works against antibiotics – evolution. Just as humans have evolved to resist many diseases and toxins that once killed us, bacteria can and do evolve to resist the antibiotics that kill them. But while humanity’s relatively long gestation and maturation periods make our evolution proceed slowly, bacteria can reproduce roughly every 20 minutes, or 72 generations per day. With that kind of reproduction rate, the odds are that eventually a mutation will occur in a bacterium that allows it to survive an antibiotic treatment.
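
To put that reproduction rate in perspective, here’s a tiny back-of-the-envelope calculation in Python. It assumes a constant 20-minute doubling time and unlimited food and space, conditions real bacterial populations never enjoy, so treat it as an upper bound rather than a prediction:

```python
# Rough illustration of bacterial doubling at a constant 20-minute generation time.
# Real populations run out of food and space long before reaching these numbers.

minutes_per_generation = 20
generations_per_day = 24 * 60 // minutes_per_generation  # = 72 generations

descendants_of_one_cell = 2 ** generations_per_day
print(f"Generations per day: {generations_per_day}")
print(f"Descendants of a single cell after one day: {descendants_of_one_cell:.2e}")
# Roughly 4.7e21 cells, which is a lot of rolls of the mutational dice in one day.
```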

The problem results from two phenomena – the overprescription and overuse of antibiotics in people, and the preventative use of antibiotics in livestock.

As much as 50 percent of the time, antibiotics are used unnecessarily in medical treatment, according to the CDC. Every exposure of bacteria to an antibiotic is another opportunity for them to develop a genetic mutation that lets them survive, and it also creates an environment free of the helpful bacteria that help control humans’ microbial populations.

Not only can people spread these drug-resistant bacteria to others, as demonstrated in the chart above, but bacteria themselves can spread their drug resistance to other bacteria they encounter through a process called gene transfer.

A chart from the National Institutes of Health depicts one method of gene transfer – bacterial conjugation, in which a gene can pass directly from one bacterium to another. Several other methods, including transformation and transduction, also exist.

Gene transfer is what makes antibiotic resistance such a persistent and fast-developing problem. It’s like someone figuring out the combination to a safe and then giving it to everybody they meet, and all those people giving the combination to everybody they meet. Soon, the safe can stop nobody.

The other big issue is the use of antibiotics in livestock to help them grow more quickly and avoid potentially costly infections.

It’s estimated that 30 million pounds of antibiotics are given annually to livestock just in the U.S. That is several times the amount given to the country’s 318.9 million people, because antibiotics are routinely given to perfectly healthy animals. Bacteria in these animals are constantly exposed to antibiotics, which cull the weak bacteria while giving the strong ones more opportunities to develop mutations to resist them.

The U.S. Food and Drug Administration has tried to curtail this overuse of antibiotics by farmers since as far back as 1977, but it has been stymied by powerful agricultural interests in the business world and in Congress.

The polar opposite of the U.S. is the Netherlands, where the government and agricultural businesses collaborated to end the use of preventative antibiotics in livestock more than a decade ago. Stringent and common-sense cleaning and veterinary practices have allowed the country’s livestock industry to continue to flourish.

The timeline from the CDC report shows how quickly bacteria can develop resistance to an antibiotic – oftentimes, it can happen in just a year or two.

Fitter bacteria = deadlier bacteria

One hope that some officials clung to in the fight against antibiotic-resistant bacteria was the hypothesis that developing resistance to antibiotics made the bacteria less deadly and more vulnerable to other methods of killing them.

A paper published this week in Science Translational Medicine refutes this hunch, however, by finding that bacteria that have developed drug resistance actually proved to be more deadly and infectious to mice than their non-resistant brethren.

The conclusion “raises a serious concern that drug-resistant strains might be better fit to cause serious, more difficult to treat infections, beyond just the issues raised by the complexity of antibiotic treatment,” wrote the authors.

“We’re in the post-antibiotic era”

That was the grim pronouncement made by Dr. Arjun Srinivasan, an associate director of the CDC, in a 2013 PBS Frontline documentary.

“For a long time, there have been newspaper stories and covers of magazines that talked about ‘The end of antibiotics, question mark?'” Srinivasan said. “Well, now I would say you can change the title to ‘The end of antibiotics, period.'”

One of the most visible and deadly consequences of antibiotic resistance is the persistent spread of methicillin-resistant Staphylococcus aureus, or MRSA. First seen in the 1960s, it is a deadly bacterium that often hits hospitals, infecting the weak and healthy alike. Though the rate of MRSA infections has declined in recent years, more than 11,000 people die from MRSA-related causes each year.

Things don’t look great for the future, but doctors, scientists and lawmakers are still working to keep antibiotic resistance from causing medicine to backslide.

“In a world with few effective antibiotics, modern medical advances such as surgery, transplants, and chemotherapy may no longer be viable due to the threat of infection,” warned the White House in a 2014 report.

I personally hope that world never comes.

Turf War

If you’ve been following the FIFA Women’s World Cup these past few weeks (go USA), you’ve probably heard talk about turf, or more specifically, artificial turf. This is the first World Cup to ever be played on artificial turf, and many players are not happy about it. Allegations of gender discrimination were made, and a lawsuit was filed and later dropped. What I found myself wondering, and you may be too, is why make a big deal over turf?

The grass is always greener when it’s fake

For thousands of years, humans played sports on regular old grass. But when roofed sports stadiums started being built in the 1960s, a fairly significant problem arose – grass needs sunlight to grow. After a hilarious experiment with painting dead grass green, the Houston Astrodome installed a brand new synthetic turf called ChemGrass in 1966. The turf was renamed AstroTurf soon afterwards by its inventors, the chemical company Monsanto, and was soon installed in many similar stadiums nationwide. Even outdoor fields that received plenty of sunlight were soon being replaced with AstroTurf as owners realized that this fake grass never needed to be cut, watered or replanted, and would stay usable and attractive in any kind of climate. In 1970, the commissioner of the National Football League predicted every team would soon be playing on “weather-defying phony grass,” as the newspaper article described it.

AstroTurf usage increased through the ’80s until nearly two-thirds of NFL teams, at least a dozen baseball teams and a handful of English soccer teams were playing on it. Eventually, players and fans began to react negatively to AstroTurf for its effects on the game and their health (discussed below), and many fields were transitioned back to natural grass or a newer version of artificial turf called FieldTurf.

The Mechanics

There were several reasons that sports teams ditched AstroTurf, but they all boiled down to the fact that playing on it was almost nothing like playing on natural grass.

AstroTurf was essentially a nylon carpet laid over concrete. It got much hotter than grass – so much so that baseball players standing on it for long periods in hot weather had their cleats melt. It was hard and springy, meaning balls that struck it bounced much higher than on natural grass. This same factor meant that players who hit it didn’t fare too well either. And the nylon surface was much easier to grip – a boon for some looking to run faster, but a major stress on the knees of many players, as a 1992 study in The American Journal of Sports Medicine showed.

Modern artificial turf (nowadays usually a version of FieldTurf) is more complex. Individual grass-like fibers are attached to a hard polymer base, with a layer of rubber granules simulating the cushioning dirt of a natural grass field.

The Controversy Continues

While modern-day turf is undoubtedly much better to play on, the jury is still out on how safe it really is. A 2013 study in Portugal found that amateur soccer players were injured at higher rates on artificial turf than on grass (though other studies have found no difference in injury rates).

Artificial turf was never popular in soccer, and FIFA, the sport’s global governing body, never allowed it to be used for World Cup matches before this year. But natural grass fields are difficult to grow in Canada, where the Women’s World Cup is being held this year, so FIFA bit the bullet and signed off on it.

The reaction was swift.

“There is no player in the world, male or female, who would prefer to play on artificial grass,” U.S. forward Abby Wambach told the Washington Post. “There’s soccer on grass, and then there is soccer on turf.”

Forward Sydney Leroux described playing on turf as “running on cement” to Vice Sports, and her celebrity friend Kobe Bryant even tweeted a picture of her bruised and bloodied legs after playing on turf to prove the abrasiveness of the surface.

FIFA was accused of sexism – not a hard allegation to make for an organization whose president once suggested women’s soccer would be more popular if the players wore “tighter shorts.” A lawsuit was filed by dozens of soccer players, though it was dropped in January.

FIFA hasn’t ruled out using turf again, though the issue is largely moot for the foreseeable future – all three of the coming World Cups (2018, 2019 and 2022) will use grass fields.

The Risky Business of Measuring a Mountain

Mt. McKinley is unquestionably the tallest mountain in North America, but its actual height is pretty questionable. The last full survey of the mountain, also known by its Native American name Denali, measured its peak at 20,320 feet in 1953, but that number has since been disputed twice by researchers using newer technology.

Now, the U.S. Geological Survey is looking to settle the matter for good using a mix of precise technology and old-fashioned manpower.

“Surveying technology and processes have improved greatly since the last survey and the ability to establish a much more accurate height now exists,” said the agency in a statement Monday.

A team of four scientists who presumably lost a bet will this month haul GPS equipment to the top of the mountain and back down to accurately size it up. Let’s hope they’re good, because Mt. McKinley is one of the most difficult peaks in the world to climb – more than 130 people have died while climbing it, and only about half of those who attempt to ascend it succeed.

The climbers are expected to be finished by July 7, and the USGS hopes to publish its data in August.

UPDATE: I’ll be back soon.

Loyal readers:

Sorry for my long absence from here – I’ve been transitioning to a new job in a new city. It has been exciting and exhausting all at once, but now that I am settled in, I plan to return to distilling my love of science into bite-sized blog posts.

So watch your inboxes and RSS feeds, and feel free to write me with any comments, suggestions, recipes, jokes, insults, etc.

-Ben

On My Own – The Self-Assembly of Early DNA

Let’s face it: The young planet Earth was not a pleasant place to be. After coalescing from the protoplanetary disk of gases and dust left over from the formation of the Sun 4.5 billion years ago, the Earth’s surface finally cooled to solid rock 1 billion years later (in between, it got hit by a planet-sized object to form the Moon and went through something called the “iron catastrophe,” which somehow isn’t being used as a band name). At this time, the Eoarchean Era, Earth’s atmosphere consisted solely of gases spewed by the planet’s many active volcanoes, meaning you’d find toxic ammonia and methane, but little free oxygen and no ozone layer to stop the Sun’s radiation. But crucially, an atmospheric pressure 10 times that of today allowed liquid water to exist on the scorching hot surface, and that was the key to life.

A paper published last month in Nature Communications presents a new take on how that first stirring of life could have formed. In particular, it looks at how “liquid crystals” could have driven the formation of primitive DNA.

The Earliest Days

Abiogenesis is the term for the spontaneous generation of life out of lifeless materials. Yet though this process has a name, its mechanics have been the subject of fierce debate since the time of Aristotle. For many centuries, it was thought that abiogenesis occurred all the time – maggots seemed to appear without cause in rotten meat, for example. Eventually, however, the work of Louis Pasteur and other scientists proved that biogenesis (“life from other life”) is the observable cause of all life on Earth, but that didn’t solve the question of where the first living organism came from.

Two overarching explanations have emerged for the origins of life on Earth – that it started here spontaneously, or that it was “seeded” from somewhere else. The “panspermia” theory, as the latter is called, has some interesting research associated with it, but it isn’t the topic of this blog post. The former idea – that microscopic life spontaneously formed from Earth’s primordial soup of inorganic, volcanic compounds – is the one we’ll focus on here.

Since the 1980s, the dominant theory on the origins of life has been “the RNA World,” which hypothesizes that all life arose from a primitive form of ribonucleic acid, a single-stranded molecule found in all life and even some non-living viruses. RNA is relatively simple and, most importantly, is able to self-replicate from non-living molecules, making it a strong candidate for carrying out the first biological reproduction. However, the molecules that make up RNA, known as nucleotides, are complex, leading many scientists to be skeptical that enough of them could have formed randomly at the same time to produce primitive RNA.

In a paper titled “Abiotic ligation of DNA oligomers templated by their liquid crystal ordering,” Italian and American scientists provide an answer by showing how a unique physical process could have driven the assembly of nucleic acids to form RNA and DNA.

Liquid crystals are molecules in a state of matter between solid crystals and liquids. They are truly in the middle – they retain the homogeneous alignment of solid crystals, but have the ability to flow like liquids. Most people have encountered them in the form of “liquid crystal displays,” or LCDs – screens in which liquid crystal molecules respond to electrical signals to form a picture.

Organic molecules can also form liquid crystals, and this paper found that very short DNA oligomers, or fragments, spontaneously order themselves in solution into liquid-crystal structures. That ordering could have templated the assembly of the fragments into longer chains, and eventually DNA and RNA.

“We envision our findings as a paradigm of what could have happened in the prebiotic Earth based on the fundamental and simplifying assumption that the origin of nucleic acids is written in their structure,” the authors wrote in the discussion.

Strong in The Force, CERN is.

At Switzerland’s European Organization for Nuclear Research (CERN), scientists working on the Large Hadron Collider announced the discovery of a new fundamental force for the universe, colloquially referred to as “The Force.”

“The Force is what gives a particle physicist his powers,” said CERN theorist Ben Kenobi of the University of Mos Eisley, Tatooine. “It’s an energy field created by all living things. It surrounds us and penetrates us; it binds the galaxy together.”

Read more about this new hope for Earth at CERN’s blog, and I hope you had a happy April Fool’s Day.