Sam Van Aken is an art professor at Syracuse University in New York. He wasn’t always immersed in the world of art, though: as a child, he grew up working on his family’s farm before pursuing his art career.
So, in 2008, when Van Aken learned that the orchard at the New York State Agricultural Experiment Station was about to be destroyed because of a lack of funding, he knew he had to put his farming past to use.
Many of the trees in the orchard were 150 to 200 years old and grew antique, heirloom native stone fruit varieties that have mostly been displaced by the hybridized and modified varieties of modern agriculture (commercially grown fruits are selected for their look and size more than any other factor, including taste).
Aken knew he had to save these rare and ancient fruit varieties, so he bought the orchard and spent the next couple years trying to figure out how to graft parts of multiple trees onto one single tree.
He started by creating a timeline of when all the varieties of fruit (about 250 total) blossomed, so he could know precisely when to graft a new variety onto the main tree.
The grafting process basically involves making an incision in the main tree, and then inserting a shoot from the tree you want to add.
When the tree was young, he grafted directly onto its root structure. Once it reached two years old, Aken began using “chip grafting” to add new varieties of fruit to various branches.
Chip grafting involves cutting a small notch into a branch of the main tree. Then, a sliver of the tree to be added (including a bud) is inserted into the notch and taped in place. Over winter, the tree heals the incision, and in doing so incorporates the new fruit variety into that branch.
After five years, Aken completed his first “Tree of 40 Fruit”, as he calls them.
For most of the year, it looks pretty much like a normal tree, but in spring, it explodes with white, red and pink blossoms before bearing its various ancient varieties of plums, peaches, apricots, nectarines, cherries and almonds.
Since then Aken has planted 15 more “Trees of 40 Fruit” in museums, community centers and art galleries around the country. His next plan is to create an orchard of them in a city setting.
If I told you I could make a glass of liquid go from being totally clear to almost completely black in a split second, you would think I was crazy.
But science has a way of making crazy things happen. The iodine clock reaction is very real and very awesome. Check it out in the video below:
So what’s going on chemically? Well, basically, it all comes down to the iodine and the sulfur.
Dissolving ionic compounds in water causes them to dissociate into their component ions.
In the first glass, the ionic compound sodium sulfite (Na2SO3) dissociates into two sodium ions (2 Na+) and a sulfite ion (SO3²⁻).

Na2SO3 → 2 Na+ + SO3²⁻

This sulfite then steals a hydrogen ion from the citric acid (C6H8O7) in the mixture, creating bisulfite (HSO3−).

SO3²⁻ + H+ → HSO3−
In the second glass, the sodium iodate (NaIO3), separates into sodium ions (Na+) and iodate ions (IO3−).
NaIO3→ Na+ + IO3−
When the two glasses are mixed, a series of reactions begins. First, the iodate ions react with the bisulfite (HSO3−) to produce hydrogen sulfate (HSO4−) and iodide ions (I−).
IO3− + 3 HSO3− → I− + 3 HSO4−
Then the excess iodate reacts with the iodide ions and hydrogen ions to form iodine (I2) and water.
IO3− + 5 I− + 6 H+ → 3 I2 + 3 H2O
But just as soon as the iodine is created, it is reduced back into iodide ions by the bisulfite still in the solution from the initial reaction.
I2 + HSO3− + H2O → 2 I− + HSO4− + 2 H+
The first two reactions happen relatively slowly, but this third reaction happens almost instantaneously every time an iodine molecule is created.
Eventually though, the supply of bisulfite runs out, allowing the iodine molecules to survive. This gives the iodine an opportunity to react with the starch that was dissolved into the water at the beginning, producing an extremely dark shade of blue.
Adding a little more bisulfite to the mix immediately reduces the iodine back into separate I− ions, turning the water clear once more until that new bisulfite has been used up as well, at which point the solution darkens again.
The experiment is called the “clock reaction” because you can control how long it takes for the dark color to appear by adjusting the amount of bisulfite.
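To see why the delay scales with the bisulfite, here’s a toy numerical sketch of the clock in Python. The rate constant, concentrations and visibility cutoff are all made-up illustrative values (the real kinetics are more involved), but the mechanism is the one described above: iodine only becomes visible once the bisulfite that destroys it is gone.

```python
# Toy model of the iodine clock's induction time. While bisulfite (HSO3-)
# remains, any iodine formed is instantly reduced back to iodide, so the
# solution stays clear; the color snaps in when the bisulfite runs out.
# k, the concentrations, and the cutoff are illustrative values only.

def induction_time(bisulfite0, iodate0=0.02, k=50.0, dt=0.001, cutoff=1e-4):
    """Integrate a crude rate law until the bisulfite is effectively gone."""
    hso3, t = bisulfite0, 0.0
    while hso3 > cutoff:
        rate = k * iodate0 * hso3      # toy second-order rate for IO3- + HSO3-
        hso3 -= 3 * rate * dt          # 3 HSO3- consumed per IO3- reduced
        t += dt
    return t

t_small = induction_time(0.01)   # less bisulfite: color appears sooner
t_large = induction_time(0.02)   # more bisulfite: longer delay
```

With these toy numbers, doubling the starting bisulfite adds extra delay before the color change, which is exactly the knob the “clock” gives you.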
“The President” is one of the world’s largest, oldest and most famous trees.
This giant sequoia is located in the only place giant sequoias are found: on the western slopes of the Sierra Nevada Mountains in California.
The President is surrounded by smaller trees, which are referred to as the “House” and “Senate” to stick with the political theme. It is approximately 3,200 years old and measures 247 feet high, 27 feet in diameter, and roughly 45,000 cubic feet in volume.
Because of its massive size, the tree had never been captured in a single image. That is, until a group of National Geographic scientists and photographers got together to study and photograph the iconic tree back in 2012.
The team battled cold temperatures while putting together intricate pulley and lever systems to scale the tree. It took a total of 32 days and 126 individual frames to stitch together a full, single image of the tree. It is the first full image of The President ever:
The video below includes footage from those 32 days and shows how the image above came together:
Check out some more images of The President below:
“Verrückt” is the German word for “insane”. It is a fitting name for the world’s tallest waterslide, which was just opened to the public at the Schlitterbahn Water Park in Kansas City.
At 168 feet and 7 inches, the Verrückt is taller than Niagara Falls. To get to the top, you have to climb 268 stairs.
John Schooley was the engineer who designed the slide. Here he is talking about when he and park founder Jeff Henry came up with the idea:
“Basically, we were crazy enough to try anything. We decided to design something entirely new, because we decided to put a three or four man boat down it, and we wanted not only the fastest and steepest water slide going downhill, but we wanted to take it uphill over a hump, to give people a weightless experience going down the other side.”
Schooley was also the first to test out the slide, along with another one of the slide’s engineers. Speaking later about the experience he said, “I was terrified.” Check out video of that first test run below:
That second hill is one of the coolest features of the slide. Because of the speed and momentum you build up going down the first slope (you drop 17 stories in 4 seconds), the G-force can feel up to 5 times greater than normal as you travel up the second hump.
G-force is a measurement of acceleration felt as weight. Basically, it’s the perceived increase in gravity you feel because you’re accelerating. G-force is what pushes you back into your seat as a plane takes off, for example.
So, when you reach the top of that hump and begin the second drop, you go from feeling like gravity is 5x stronger than normal (5 Gs) to feeling weightless in a split-second. It’s not unlike what astronauts experience when they leave Earth’s atmosphere (although the G-force they feel is many times higher).
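As a rough sanity check on those numbers, here’s a back-of-the-envelope sketch in Python. It treats the first drop as straight-line motion with constant acceleration from rest, which is a simplification: the ~5 G figure actually comes from the curved transition into the hump, which this doesn’t model.

```python
# Back-of-the-envelope numbers for the Verruckt's first drop, using the
# figures in the text: roughly the full 168-ft height, covered in ~4 s.

FT_TO_M = 0.3048
G = 9.81                       # standard gravity, m/s^2

height_m = 168 * FT_TO_M       # treat the drop as the full tower height
t = 4.0                        # seconds, from the text

# Constant acceleration from rest: d = a*t^2/2  =>  a = 2d/t^2, v = a*t
a = 2 * height_m / t**2
v = a * t

print(f"average acceleration: {a:.1f} m/s^2 ({a / G:.2f} g)")
print(f"speed at the bottom:  {v:.1f} m/s (~{v * 2.23694:.0f} mph)")
```

That works out to somewhere around 57 mph at the bottom, which is why carrying that momentum up and over the second hump produces such a dramatic swing from heavy to weightless.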
The slide was opened to the public this past Friday. Here’s what it looks like to ride the Verrückt as a member of the public. Garmin VIRB sports camera technology allows you to track speed and heart-rate as you watch:
Olfactory receptors are the cells which give us our sense of smell. The average human has five to six million of these olfactory receptors in their nose.
Though there are other creatures with more powerful noses (dogs have up to 220 million olfactory receptors), the human sense of smell is actually one of the more acute in the animal kingdom.
But olfactory receptors aren’t just in the nose. In recent years, scientists have been finding them in all kinds of strange places: the spine, the kidneys, even in sperm!
Recently, a group of researchers from Hanns Hatt’s lab at Germany’s Ruhr University Bochum discovered that these smell receptors are also in our skin. And what’s more, these olfactory receptors seem to be involved in the healing process. Their results were published in the journal Nature.
One of the olfactory receptors they found in the skin is known as OR2AT4. Furthermore, the researchers found that Sandalore (a synthetic sandalwood oil that’s often used in aromatherapy) bonded to the OR2AT4 receptors in the skin.
But rather than sending a signal to the brain when it bonded (like the receptors in your nose do), the Sandalore triggered the skin cells to divide and migrate: the two processes your skin uses to heal itself.
In their experiments, the researchers mixed skin cells with Sandalore in test tubes and cultures for five days. They found that in the presence of Sandalore, new skin cells were created (through cell division) 32% faster and migrated 50% more than skin cells that hadn’t been exposed to the oil.
The results were undoubtedly impressive, but the researchers also pointed out that just like everyone’s noses are different, so are the smell receptors in our skin. Some people have more, some have less.
Just how much of an impact sandalwood oil has on the healing process depends on the amount and the type of olfactory receptors in your skin.
We live in a world saturated with sensory stimulation. From our cell phones to our laptops and TVs, almost our entire day is a marathon of sights and sounds, all competing for our increasingly short attention spans.
So you would think most people would enjoy the opportunity to get away from it all and gather their thoughts. But a recent study from the University of Virginia found quite the opposite.
In fact, many of the participants even started giving themselves electric shocks as their time alone dragged on.
Psychologist Timothy Wilson led the study, which was recently published in the journal Science. He had this to say about the results:
“I think many of them were trying to shock themselves out of boredom… It’s just a sign of how difficult (being alone with one’s thoughts) can be for people…. This isn’t something that most people find really enjoyable.”
For the study, 55 college students agreed to give up all distractions (like cell phones, tablets and mp3 players) and spend between six and 15 minutes in a sparsely furnished room on UVA’s campus. Afterwards they were asked to rate their enjoyment on a scale of 1-9.
The average rating was right around the middle of the scale. In other words, the average student was pretty much indifferent to the idea of spending a few minutes alone.

The results also meant that about half of the students rated the experience as unpleasant. But the most unsettling findings involved the electric shock.
Before entering the room, participants were given an electric shock on their ankle so that they could gauge how painful it was. They were then told that they could shock themselves again during their time alone if they wanted to.
Of the 55 participants, 42 said that they would be willing to pay to avoid being shocked again. But shockingly (pun intended), 18 of those 42 students (~43%) ended up shocking themselves anyway.
It seems that the students decided that even a jolt of pain was worth it to break the boredom of their seclusion.
Wilson was definitely surprised by the results. It baffled him that it was so difficult for the students to use their brains to entertain themselves:
“All of us have pleasant memories we can call upon, we can construct stories and fantasies.”
But he thinks that the unfamiliar environment (i.e. an empty room) throws off our normal thought processes:
“I think it’s an issue of mental control. The mind is built to engage in the world and when you give it nothing to engage it, it’s hard to keep one train of thought going for very long.”
Wilson added that he didn’t think the phenomenon was a modern one, because there were complaints of people not taking the time to sit and contemplate as far back as ancient Roman times.
Personally, I think this is a pretty weak justification for his hypothesis. Ancient Rome was a very advanced society for its time, but it was a far cry from our modern world technologically.
The average Roman had to spend a much larger portion of their time doing grueling physical labor, leaving them physically exhausted at the end of the day.

In our modern world, many of us still come home from work exhausted, but it’s more a result of mental exhaustion than of overworking our bodies.
Also, we have become extremely dependent on our mobile devices in the last decade or so. It’s become instinct for young people to check Twitter/Facebook/Instagram any time we get bored, and I think the students in the study experienced some withdrawal when they no longer had access to this digital crutch.
Whatever the case may be, the results of the study should make all of us take a look at our own lives and see where we can find time to reflect and make sense of all the information we process in this fast-paced world.
The average brain is only able to hold about seven pieces of information at a time (this is why phone numbers are an area code plus seven digits). Our smartphones alone constantly take up a significant portion of these seven slots (thinking about your texts, a picture you just Instagrammed and a tweet you just read already fills 3 of those 7 slots).
This is why it’s so important to make time to sort through your thoughts, free of any other distractions. You may be surprised at what you find in your own mind when you take the time to listen.
Five years ago, Robert Whelan, a former postdoctoral fellow in psychiatry at the University of Vermont (UVM) and current lecturer at University College Dublin, joined forces with Hugh Garavan, associate professor of psychiatry at UVM.
The pair of psychiatric researchers wanted to see if they could determine the factors that predicted binge drinking in teens.
In the largest longitudinal (long-term) adolescent brain imaging study to date, they gathered 2,400 14-year-olds from 8 regions across Europe, putting each of them through 10 hours of assessments. These tests included “neuroimaging to assess brain activity and brain structure, along with other measures such as IQ, cognitive task performance, personality and blood tests”.
Here’s Robert Whelan describing the researchers’ hopes for the study:
“Our goal was to develop a model to better understand the relative roles of brain structure and function, personality, environmental influences and genetics in the development of adolescent abuse of alcohol… This multidimensional risk profile of genes, brain function and environmental influences can help in the prediction of binge drinking at age 16 years.”
They have kept up with the teens since the initial tests 5 years ago, keeping track of which teens developed habits of binge drinking.
Whelan and Garavan’s study, recently published in the journal Nature, attempted to predict which teens would be binge drinking by the age of 16 using only the data collected when the teens were 14.
By examining around 40 different variables, including factors like brain function, genetics and family history, the researchers were able to design a unique analytical method to predict binge drinking in the test subjects. Here’s Hugh Garavan:
“Notably, it’s not the case that there’s a single one or two or three variables that are critical… The final model was very broad — it suggests that a wide mixture of reasons underlie teenage drinking.”
As Garavan points out, there weren’t a few major factors primarily responsible for putting teens at risk; rather, it was the combination of a number of different, seemingly unrelated factors that predisposed a teen to binge drinking.
The best predictors of binge drinking, according to Garavan, were personality, thrill-seeking tendencies, lack of conscientiousness, and a history of drug use in the family. Teens who had experienced stressful life events, like a divorce or family death, were also more likely to binge drink.
But there was another somewhat surprising find: bigger brains predicted higher chances of binge drinking. As our brains mature during adolescence, they destroy rarely-used neural connections to increase efficiency. This can actually shrink the brain.
Here’s Garavan again:
“There’s refining and sculpting of the brain, and most of the gray matter — the neurons and the connections between them, are getting smaller and the white matter is getting larger… Kids with more immature brains — those that are still larger — are more likely to drink.”
Putting all of these factors together, Whelan and Garavan created a model that predicted with 70% accuracy which 14-year-olds in the study would become binge drinkers by the age of 16.
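The paper’s actual statistical machinery isn’t detailed here, but the general shape of the approach (one classifier fed by roughly 40 individually weak variables) can be sketched with entirely synthetic, made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 2,400 "teens" x 40 risk variables (brain measures,
# personality scores, family history, etc. in the real study).
n, p = 2400, 40
X = rng.normal(size=(n, p))

# The outcome is driven weakly by many variables at once, with no single
# dominant factor -- echoing Garavan's point that the model was "very broad".
true_w = rng.normal(scale=0.3, size=p)
prob = 1 / (1 + np.exp(-(X @ true_w)))
y = rng.random(n) < prob

# Plain logistic regression fit by gradient descent.
w = np.zeros(p)
for _ in range(500):
    pred = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (pred - y)) / n

accuracy = np.mean(((X @ w) > 0) == y)
```

On data like this, no single column predicts much on its own, but the combined model does noticeably better than chance, which is the qualitative pattern the study describes.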
Gunter Schumann is a professor of biological psychiatry who heads the Social, Genetic and Developmental Psychiatry Center at the King’s College (London) Institute of Psychiatry. He was the principal investigator for the study. He hopes that this new research will help identify and support at-risk teens early on in their adolescence:
“We aimed to develop a ‘gold standard’ model for predicting teenage behavior, which can be used as a benchmark for the development of simpler, widely applicable prediction models… This work will inform the development of specific early interventions in carriers of the risk profile to reduce the incidence of adolescent substance abuse.”
Schumann also adds that the data collected from this study will be used to further investigate how environmental factors affect the development of patterns of substance use.
In 1772, French nobleman and chemist Antoine Lavoisier used a lens to concentrate sunlight (magnifying-glass style) on a diamond in an atmosphere of oxygen. The diamond released only carbon dioxide (CO2), proving that diamonds were made up only of carbon.
Then in 1797, English chemist Smithson Tennant further bolstered the findings by burning both graphite (which is also composed entirely of carbon) and diamonds, showing that equal weights of the two minerals produced the same amount of gas, consistent with both being pure carbon.
From that point on, the race to manufacture a synthetic diamond was on. It became a sort of holy grail for scientists and scam artists alike.
Individuals claimed to have successfully manufactured diamonds a number of times over the next century and a half, but none of their claims proved to be valid or their experiments reproducible.
Enter Howard Tracy Hall, who typically referred to himself as H. Tracy Hall or simply Tracy Hall.
Hall was born in Ogden, Utah in October of 1919. He was a bright kid: his hero was Thomas Edison and he announced in the fourth grade that he would one day work for General Electric.
After spending two years at Weber College, he got his bachelors and masters at the University of Utah in Salt Lake City.
He then spent two years in the Navy before heading back to the University of Utah to get his Ph.D. in physical chemistry. He finished the graduate program in 1948.
Just two months later, he realized his childhood dream: GE offered him a position in their Research Lab in New York, working on “Project Superpressure”, which aimed to manufacture a synthetic diamond.
When Hall arrived at the lab in New York, GE was in the process of buying a massive $125,000 press that was capable of generating pressures up to 1.6 million pounds per square inch in a confined space.
Hall wasn’t impressed. He had previously built his own pressure chamber from a salvaged 35-year-old Watson-Stillman press, and thought he could create a better machine with only an additional $1,000.
Unfortunately, GE wasn’t interested. They refused to give him the funds or to even let him use their state-of-the-art machine shop to build it.
But Hall wasn’t going to be stopped. He got a friend and colleague to let him use the machine shop after hours and got a former supervisor to persuade the company to purchase the expensive carboloy (tungsten carbide dispersed in cobalt) that Hall needed to build the chamber.
On December 16, 1954, almost all of the researchers had left for Christmas break. Hall, on the other hand, was in the lab by himself, preparing for final testing of his new pressure chamber. He had experienced a number of false starts, but was stubborn in his pursuit.
He later described the moment when he unsealed his apparatus:
“My hands began to tremble; my heart beat rapidly; my knees weakened and no longer gave support. My eyes had caught the flashing light from dozens of tiny . . . crystals.”
Hall tried the test a couple more times, and got the same result every time. He then had a colleague, Hugh H. Woodbury, reproduce the experiment. He, too, created diamonds.

Hall reported his discovery to GE officials. They initially thought his findings were exaggerated, but after the experiment was reproduced in front of them (with Hall outside the building), they were convinced.
On February 14, 1955, GE announced that it had manufactured the first synthetic diamonds. Media outlets around the world trumpeted it on the front page.
For his efforts, they gave Hall a $10 savings bond. “Big deal,” he said later.
The diamonds weren’t large enough or of high enough quality to be sold as jewelry, but since diamonds are one of the hardest minerals on earth, they were perfect for industrial applications, allowing us to cut and harvest minerals that had been impossible to collect before.
Upset by the lack of credit, Hall left GE for BYU shortly after the announcement. However, the work was so ground-breaking that the government slapped a secret label on Hall’s device, preventing him from using it in his research.
Still, Hall refused to be stopped. He designed a new apparatus, called the tetrahedral press, which was even better than the first one and circumvented all of the patents held by GE.
He published his work on the new pressure chamber in a popular scientific journal. The government responded by slapping another secret label on the new device.
However, the government lifted this second secret label a few months later, allowing Hall access to his invention. He and two other colleagues would later start MegaDiamond, which remains one of the largest synthetic diamond providers to this day.
Since the 1950s, advances in other technologies have improved Hall’s methods, and synthetic diamonds are now used in many electronic devices like laptops and cell phones.
The modern methods are able to create synthetic diamonds as large as 12 carats with much higher quality and clarity, allowing them to be sold for jewelry as well.
After his retirement, Hall became a tree farmer. He passed away at age 88 in July of 2008.
Liquid nitrogen has one of the lowest boiling points of any known substance at −321°F (−196°C), which is why anything that comes in contact with it is usually flash-frozen.

A substance’s boiling point varies with air pressure. For example, at sea level, water boils at 100°C (212°F). But at the top of Mt. Everest, where the air pressure is only about a third of what it is at sea level, water boils at 71°C (160°F).
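That pressure dependence can be estimated with the Clausius–Clapeyron relation. Here’s a minimal sketch (it treats water’s heat of vaporization as constant, which is only an approximation):

```python
import math

# Clausius-Clapeyron estimate of water's boiling point vs. pressure:
#   1/T2 = 1/T1 - (R/L) * ln(P2/P1)
L = 40700.0      # J/mol, approximate molar heat of vaporization of water
R = 8.314        # J/(mol*K), gas constant
T1 = 373.15      # K, boiling point of water at 1 atm

def boiling_point_k(pressure_atm):
    return 1.0 / (1.0 / T1 - (R / L) * math.log(pressure_atm))

t_everest_c = boiling_point_k(1.0 / 3.0) - 273.15  # ~1/3 atm at the summit
```

Plugging in a third of an atmosphere lands right around the 71 °C figure quoted for Everest.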
So as the air is pumped out of the vacuum chamber, the liquid nitrogen’s boiling point drops below its own temperature, turning it into a superheated fluid. This superheated liquid nitrogen does some crazy things:
As the nitrogen boils, evaporation cools it back down until it freezes solid. Its molecules then reorient themselves into a more tightly-packed pattern in a fraction of a second, causing cracks to spread quickly in fractal patterns across the solid nitrogen.
Liquid nitrogen isn’t just cool for science experiments. It’s widely used in everyday life as a refrigerant for the freezing and transportation of food and as a coolant for superconductors. It’s even used to freeze off skin abnormalities like warts.
Dr. Christopher Keating is a former physics professor who taught at the University of South Dakota as well as the U.S. Naval Academy. He is also the author of the book “Undeniable: Dialogues on Global Warming”.
Recently, Keating posted a challenge on his blog: he offered $10,000 to anyone who could disprove man-made climate change using the scientific method. In the post, Keating said,
“I know you are not going to get rich with $10,000. But, tell me, wouldn’t you like to have a spare $10,000? After all, the skeptics all claim it is a simple matter, and it doesn’t even have to be original. If it is so easy, just cut and paste the proof from somewhere. Provide the scientific evidence and prove your point and the $10,000 is yours! This is no joke. If someone can provide a proof that I can’t refute, using scientific evidence, then I will write them a check.”
Keating admits his bias, saying he’s sure he’ll never have to write the check because,
“The scientific evidence for global warming is overwhelming and no one can prove otherwise.”
But in response to those criticizing his ability to judge fairly because of his bias and his incentive to not lose $10,000, he had this to say:
“If I am a fraud, then I will be held up as an example of how climate scientists everywhere are frauds.”
Keating refuted the first submission because the data used by the skeptic was “cherry-picked” and only showed the last 14 years of average yearly temperature changes (in Celsius).
Keating responded by posting the same graph, but for the last 34 years, which showed a long-term upward trend.
The second submission was a little better. The submitter used data on naturally occurring climate change to argue that the current fluctuations aren’t a result of human activity. While Keating couldn’t dispute the data presented, he argued that showing there were natural fluctuations in the past does not prove that the warming we’re now experiencing is natural.
In a recent interview with the College Fix, Keating added that the movement to deny man-made climate change is,
“…very similar to the one waged by tobacco advocates to deny a link between smoking and lung cancer.”
Keating is very confident in his findings and comes off as arrogant more than once in his responses (which I think detracts from his solid arguments). Also, the fact that he has the final say on whether a submission passes the test makes the competition somewhat rigged. However, he does do a great job of backing up his positions with solid data.
For anyone who wants to learn more about both sides of the argument, the exchanges between Keating and those refuting his claims are a pretty good place to start.
Worth noting: one of the biggest indicators of how we are affecting the climate is the amount of carbon dioxide in the atmosphere.
At the beginning of the industrial revolution, this number was around 300 parts per million. By the late ’80s, it had risen to 350 ppm. Now, carbon dioxide levels have risen above 400 ppm for the first time in recorded history.
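A quick bit of arithmetic on those figures shows how much the pace has picked up. The anchor years below are my own rough assumptions, since the text only says “the beginning of the industrial revolution” and “the late 80s”:

```python
# Approximate CO2 concentrations (ppm) at assumed anchor years.
ppm = {1850: 300, 1988: 350, 2014: 400}

# Average rate of increase over each interval, in ppm per year.
rate_early = (ppm[1988] - ppm[1850]) / (1988 - 1850)   # ~0.36 ppm/yr
rate_recent = (ppm[2014] - ppm[1988]) / (2014 - 1988)  # ~1.9 ppm/yr
```

Even with generous error bars on the years, the recent rate works out to several times the earlier one.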
While levels that high have existed before (millions of years ago), the rise in carbon dioxide concentration over the last century is far faster than at any point in the past. Here are a few reactions to that announcement from NASA scientists.