Monday, June 26, 2017

Heat Death

What is meant by “heat death”?

This is a topic that came up in class, but since it was only vaguely related to what we were actually talking about, I did not take the time to explain it very well. Heat death is a theory about the ultimate fate of the universe. The second law of thermodynamics states that entropy increases over time in any closed system. In other words, the energy available to do work always decreases unless new energy is being added. The earth has life, violent weather, and so on because the sun is constantly pouring energy into it, so the earth itself is not really a closed system. Meanwhile, however, the sun is slowly using up its fuel, and one day it will burn out. The solar system as a whole is essentially a closed system (if you ignore the light, heat, and radiation that reaches us from the deep stretches of outer space), and the universe as a whole certainly is. As far as we know, the universe is steadily using up its available energy, and nobody out there is adding any. Therefore, the theory goes, the universe will eventually exhaust all the energy that is available to do work, and it will die cold and dark.
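For anyone who likes seeing the law in symbols, here is a minimal statement of it (standard thermodynamics notation, nothing specific to our class): for an isolated system, the entropy S can only stay the same or increase over time,

\[ \Delta S_{\text{isolated}} \geq 0 \]

Heat death is just the limiting case in which entropy has reached its maximum and no free energy remains anywhere to do work.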

The notion of the universe as a closed system that is burning out raises an interesting question, related to the most interesting question that humans have ever asked: if the universe is constantly getting colder, and it has been in existence forever, why hasn't it burned out already? The obvious answer is that the universe has not existed forever. Scientists typically identify the Big Bang as the event that brought the universe into being. At the moment of the Big Bang, the universe had its maximum supply of usable energy. As time passes, that energy becomes less and less available to do work.

But what happened before the Big Bang? In a way, this is a paradoxical question. Time began at the Big Bang, so there is no sense in talking about what was there before. That does not stop people from trying to understand what could have happened. One popular theory is that another universe existed before ours. It started with a Big Bang of its own and expanded and expanded, but then gravity slowly brought the expansion to an end and pulled that universe back together until it collapsed into a gravitational singularity, which then resulted in our Big Bang. According to this theory, the same fate awaits our universe, the universe it creates in turn, and so on forever.

Even though this theory is well known, possibly because it was popularized by Carl Sagan's Cosmos book and television series, there are a couple of serious problems with it. First, there does not seem to be enough mass in the universe to get it to collapse back on itself. Maybe “dark matter,” which cannot be detected like other forms of matter, exists in sufficient quantities to allow the “Big Crunch” to take place. Second, the expansion of the universe is not actually slowing down, but accelerating! This is said to be the effect of something called “dark energy,” which scientists do not understand well. Nobody knows whether the universe will continue to expand faster and faster; since the underlying principle is so poorly understood, all we have to go on is our observations.

These two problems with the “Big Crunch” scenario leave open the question of how the universe got so much energy to begin with. Is there another principle that works against entropy? Some physicists believe in a steady state theory, which says that the universe as a whole does not operate by the rules of the second law of thermodynamics. There are various forms of this theory, but its proponents have not been able to point to observations that convincingly support it.

The “Big Crunch” theory connects to an interesting time travel paradox that I have been thinking about recently. Say that there is a giant war that makes it hard for the planet to sustain life. A few people, animals, and plants survive and find a way to eke out a living. Eventually civilization is rebuilt, and technology advances to a point where time travel is possible. A time traveler is convinced that it would be better if the catastrophic war had never happened, so she travels back in time to warn humanity about the cataclysmic effects of their military conflicts. Instead of doing what people usually do (continue on the road to self-destruction, only to realize after the fact that they had been foolish), they heed the warnings and destroy their extreme weapons.

The effect of this disarmament is that all the events that led to the birth of the time traveler and the creation of the time machine are erased. Of course this is a paradox, because now the warning can never come. But let's pretend the paradox doesn't matter. What happens instead is that the stretch of history from the time of the war up to when the time traveler makes her trip becomes a type of “hypothetical loop.” The only effect all of those events have on the course of history is that they led to the time traveler; other than that, all of those events, all of those people, everything that was built, made, dreamed, thought, and done is completely erased.

Now some people will speculate that history continues in two trajectories. The people who were left behind by the time traveler will continue in their own “universe,” while her arrival in the past creates a new trajectory that is, in effect, a different “universe.” Regardless of what happens to those people, everything that happens after she leaves is completely unknowable to the people she travels to save. Not only is it unknowable, though; it is a future that does not exist at all for the people who averted the war. They eliminated that future when they chose peace over war.


This connects to the “Big Crunch” theory because, if this universe is crushed into nothingness to become a new universe, then everything that happens in our present and our future is a “hypothetical loop.” Not only will our existence not matter to the people of the next universe, there is no meaningful way in which we exist for those people at all, not even as a piece of their past. Time stopped and started again.

Other jobs

What would you do if you couldn't be a teacher?

I don't know. Here are some things that I would enjoy doing:

  1. Work with developmentally disabled adults. The problem with this as a career is that the pay is tragically low. There is definitely a lot of money invested by the government, parents, and others in the field, but very little of it trickles down to the people who actually work one-on-one with clients. Do you remember this past year when the minimum wage was increased, and they said that it would throw the industry into turmoil? It turns out that there are trained professionals who have been working in the field for decades who are still not taking home $10 an hour!
  2. Write and perform puppet shows. I would particularly be interested in creating puppet shows that could be enjoyed by teenagers and adults. The problem with this sort of job is that I don't work well with fluid deadlines. I like daily deadlines so that I don't get too worried about whether I am doing enough work.
  3. Tech support. I have worked in this field before, and think that this is the sort of job I could actually get. The thing I liked least about working in this area is that I had to study every day to keep up with changes, and I felt that knowledge in this area is too ephemeral – you learn something only to have to replace that knowledge with something else. If I am going to study every day, I would rather study something timeless and meaningful.


The main requirements I have for a job are that I want to work with people I like on a personal level and admire on a professional level. That is why I am fortunate to work where I do.

Wednesday, June 21, 2017

Grades and Learning

Does having grades help or hurt students? (Because I heard of a school where going to class was optional and for the first two weeks students didn't go to class. After that they started going because they wanted to learn instead of being forced. I'm not sure if it's real, though.)

I have been curious about this for a long time, and I have taught classes that gave grades and others that didn't, so I have seen it from both sides.

Probably the most prominent critic of grades currently is Alfie Kohn. You might be interested in this article, in which he summarizes his analysis of the effect of grades on motivation. Kohn argues that giving grades makes students less interested in what they're learning, leads them to look for the easiest possible task, and reduces the quality of their thinking. Students who are focused on getting better grades eventually come to see the grade as the purpose of education, and they often lose their love of learning. Students at schools where grades are heavily emphasized, or who have parents who give them monetary rewards for grades, are more likely to cheat.

Kohn has some good insights and research to back up what he says, and I think that grades often have a detrimental effect on students who are highly motivated to learn by their own intrinsic curiosity. On the other hand, there are students with low intrinsic motivation who will learn only if there is some extrinsic motivation. Since a modern classroom has students of all types of motivation, grades seem to be a necessary evil. Probably the best solution is to give grades, but not to focus on them as the sole goal of education. This is a hard balance to strike, especially since parents have the central role in creating expectations, and so the school's influence is limited. This summary of research has a good bibliography if you want to read further.

I used to teach non-credit classes at community college. Most of the pupils in these classes were senior citizens, but there were some younger people, too. Students are drawn to these classes for the purpose of learning, and there are no grades. I found these classes to be very enjoyable, and the students always did extra work to keep the discussions lively. If I could make a living teaching this type of class, I would enjoy it.

The difference between the students in a non-credit class and a public school setting is that the non-credit classes are completely voluntary, and the students are paying to attend. They therefore are made up of students who are intrinsically motivated. Since middle school and high school are compulsory, you have to develop the class to account for both types of students.

I should also say that I know of students who start school with a great deal of internal motivation, but whose struggles in one or two areas make it hard for them to get exemplary grades, and their love of learning deteriorates as a result. Students like this can start to feel that they are not valued because they repeatedly get sub-optimal grades, and it is hard to engage intellectually with a community that you feel does not value you. Evaluating students on their “sense of wonder” and “depth of inquiry” is designed to combat this deterioration, but it is hard to prevent in all cases. It would be nice if we could give grades only to those students who are not intrinsically motivated, but that is not possible.

There are also students who come to school brimming with curiosity who DO do well in school, but who attach their ego so strongly to the notion of “being an A student” that they replace their love of learning with a desire to do better than other students in the class. I hope that they get their curiosity back when they are older and are not constantly being evaluated.


I think the school you are referring to is Summerhill, which is an English school that has been around for almost 100 years. Classes there are optional, but tend to be well attended. There have been schools in many countries, including the U.S., that are based on the program. The school has supporters and detractors, but there is no denying that there have been numerous successful graduates. The reason the school is not necessarily a good laboratory for determining the best education for everybody is that it is a very expensive boarding school, and the students who are there tend to come from families that value learning enough to select a school like that. In other words, it is a selected community that does not represent the variety of students that wind up in a public school (or public charter school).

Tuesday, June 20, 2017

Why Do We Study Latin and History?

I received several questions that center around the theme of why we learn Latin or history, when very few students will use them.

Students often have a utilitarian view of education; if a class can be applied to your career, then there is a good reason for taking it – otherwise, no. For example, if you were planning to repair watches for a living, then a bezel-polishing class would be just the thing; if you were planning to be a police officer, it would be a complete waste of time. Likewise, very few students are going to “use” Latin or history (although those going into medicine or politics clearly might), so what is the point of taking them in school?

The beauty of a liberal education is that it is not strictly utilitarian in intent. The word “liberal” comes from the Latin word “liber,” which means “free.” You would know this if you had spent as much time studying Latin as you do kvetching about taking it. Historically, a liberal education was intended for “free” citizens – meaning those who were wealthy enough to be free from the need to practice a trade. Therefore, the object of a liberal education is to learn for the sake of learning, not to hone skills that a person needs in the workplace.

The subjects of Latin and history are important to free citizens because they connect us with our roots; they allow us to explore the great ideas of the past and to come to a better understanding of the foundation of our culture. They also give us a peek into the human mind. Everybody knows who Julius Caesar was, but who can tell us what he thought? What assumptions did he make about the people he fought against? What was his understanding of fate? Nobody can really think like another person, but reading Caesar in Latin can – to a certain extent – give you the ability to step outside of yourself and come closer to understanding somebody from a time far removed from our own.

The questions that a scholar of Latin and history studies are important because they give us a better perspective on truth itself, not because they are particularly useful in a specific job.


That being said, a liberal education has been shown to be a great foundation for further education in all fields. Every year graduates from our school get accepted to the most prestigious and competitive schools in the world. The upper echelons of scientists, artists, writers, mathematicians, engineers, and entertainers include numerous people who have a classical foundation. The Latin Language Blog put together an interesting little list of people in various fields who studied Latin. Some of them, such as J.K. Rowling, use Latin in their work, while others apply the intellectual tools they developed in the study of the classics to other fields.

More on Video Games

Another issue with video games that is of particular concern to educators, particularly in a liberal-arts school, is that they may erode the ability to stay with tasks that require extended focus without immediate payoff, such as reading long books. Video games can be long and involved, but they tend to be exciting throughout. Some people speculate that the brain might become accustomed to constant excitement, and therefore less able to deal with long, dry passages. I could not find any good research on this issue; so far, most of what I have found is speculative rather than scientific.

A similar issue that comes up is “video-game addiction.” The human brain can become dependent on constant stimulation, and in the brain this dependence looks a lot like addiction. There is a lot of research on this going on right now, and it looks like it is another reason to put a limit on the amount of time you spend playing.

I think the reason people find video games so compelling is that the tasks you complete in a game are similar to tasks you would complete in the wild. My dog loves hunting, digging, and chasing. In the wild, this is how a dog would make a living – and she LOVES it. If she could spend her whole day hunting for animals, she would.

While not all people hate their jobs, it is rare to find people who love their jobs as much as my dog loves chasing rabbits. I think that this is because our jobs are so far removed from the jobs we had when we were hunting and gathering. Video games give us a chance to do the jobs that our brains are wired to do: looking for things, running from danger, hiding, chasing, throwing, fighting, and so on.


Now, we live in civilization, and we have to adapt to more civil ways of getting by in the world. Maybe the reason our brains become “addicted” to video games is because we have a natural proclivity towards that mode of existence. Wouldn't it be cool if we could find a way to make our actual careers more similar to the careers of our “wild” ancestors?

Effects of Video Games

Do video games have any benefits for things like on-the-fly thinking?

Before I attempt to answer this question, let me say that this is an area of inquiry that interests many different people for varying reasons, so studies updating and refining our understanding of the effects of video games are coming out all the time. What I write here is provisional, based on the research I have done so far.

Moderate video game playing has numerous important benefits. The most well-researched of these is an improvement in hand-eye coordination. The improvement is most noticeable in tasks that resemble video-game activities, such as firing a weapon and operating vehicles. Young adults who play video games are more likely to score high on military sharp-shooter tests, for example.

Another important benefit is that children who play video games learn new skills more quickly. The practice of adapting to changing situations makes the brain more flexible. This article describes neuroimaging results showing that the brains of children who play video games are better developed in areas related to learning than those of children who play no video games. This has a noticeable effect on academic performance, as moderate video-game players typically do better in school.

There are also some bad effects of video games, particularly among children who play excessively. The best known of these is that heavy video gamers are typically more aggressive and defiant, and they report more conflicts with their peers. Excessive video game playing can also lead to social isolation.

The take-away is that video games are good in moderation, but you should limit your exposure to them. The benefits seem to be available to people who play as little as an hour a week. The bad effects are seen most dramatically in children who play more than nine hours per week. So, if you can keep your gaming to a little over an hour a day, you are likely to avoid serious problems. If you do notice that you are getting short of temper, you may want to cut back a little to see if the problem stops.
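To make the arithmetic behind “a little over an hour a day” explicit (this is just my own conversion of the nine-hour weekly figure, not a number from the studies):

\[ \frac{9\ \text{hours/week}}{7\ \text{days/week}} \approx 1.3\ \text{hours/day} \]

So staying at or under roughly an hour and a quarter per day keeps you below the weekly threshold where the bad effects show up most clearly.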


Tuesday, June 13, 2017

Can Giraffes Vomit?

Can giraffes throw up?  If so, how long does it take?

A quick internet search will reveal that this is a question that numerous people ask, but few can answer.  One of the main problems is that giraffes are very picky eaters; they take leaves one by one and taste them thoroughly before sending them down the neck.  Therefore, they have little reason to vomit.  That being said, they do have the physical apparatus necessary to vomit.

Giraffes are ruminants, and so they routinely regurgitate their cud to re-chew it.  Ruminants do vomit occasionally, but since they have four stomach compartments it is most common for them to vomit from one compartment into another.  It is much rarer for them to expel food from the first stomach into the open air for us all to enjoy.  It does happen occasionally that a deer or a cow has to spew regurgitated food out of its mouth, but this has not been observed in giraffes.  If a giraffe did have to get rid of nasty food that way, it could simply bring the food up the way it brings up its cud, hack it up, and spit it out.  Since a giraffe's first stomach ferments food instead of breaking it down with acid, the result would not be repulsive vomit like people spew out, but more like a chunky mass of fermented leaves and grasses.  Quite pleasant, actually.  By comparison, I mean.  Also, it would not spray forth like our puke, because the giraffe does not have the muscles necessary to violently expel its vomit like that.  There are a lot of perks to being a giraffe, but having the ability to spew a glorious rainbow of regurgitated food onto the heads of the members of the more altitude-challenged species is not one of them.  Sorry.

How long would it take?  Not very long at all.  If you have the time, it is possible to watch riveting videos of giraffes hacking up their food.  The cud comes up in a big wad, so you can watch it travel up the neck into the mouth.  Even with their enormous necks, the process takes between about 1.5 and 3 seconds.
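For a rough sense of how fast that is, here is a back-of-the-envelope estimate (my own, assuming an adult neck length of roughly 2 meters, which is typical but not something measured from the videos):

\[ v \approx \frac{2\ \text{m}}{2\ \text{s}} = 1\ \text{m/s} \]

In other words, the wad of cud climbs the neck at about one meter per second, roughly a slow walking pace.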

Tuesday, July 5, 2016

Other Causes of Hallucinations

I should have mentioned that hallucinations are also frequently related to cerebral cortex problems such as epilepsy and migraines.  About 1/3 of migraine sufferers see auras preceding the onset of the headache. 

In addition, people frequently report having hallucinations as they are falling asleep (known as "hypnagogic hallucinations") or as they are waking up (known as "hypnopompic hallucinations").  They can be frightening, particularly if they are accompanied by sleep paralysis (as they often are).  These are often images of faces floating over the person.


Illusions and Hallucinations

Why do we sometimes see things that are not there?

The phenomenon of “seeing things” comes in two varieties: illusions and hallucinations. If you see something but misinterpret what you are seeing, it is an illusion. If you see something that is unrelated to visual stimuli, then that is a hallucination.

There are, of course, non-visual illusions and hallucinations. Any sense that you have can be fooled. You can learn how to experience some tactile (sense of touch) illusions by clicking here.  An example of a tactile hallucination would be the phantom limb phenomenon, in which people who have had an amputation still feel the missing limb.  I am now experiencing another type of tactile hallucination caused by neuropathy in my left foot resulting from Achilles' tendonitis.  My left toes and the front part of my foot constantly feel as if they are in heated water.  For the most part it is not too unpleasant, although I miss feeling other sensations in my toes.  Another sense that is often subject to illusions and hallucinations is the sense of smell.  People often misidentify smells (illusion) or smell things that simply are not there (hallucination).

Visual illusions are caused by misinterpreting visual stimuli.  Since humans are visual animals, we make sense of our world primarily based on what we see.  We are constantly taking in images and trying to fit them into the world we construct in our minds.  Every time we see something new, we tend to compare it to things we already know in order to make sense of it.  So, for example, visible heat waves rising in the distance might be interpreted as water in the classic illusion known as a "mirage."  Since faces are so important to our lives, we tend to see faces even in inanimate objects, which is another classic type of illusion called "pareidolia."  (Strictly speaking, pareidolia does not merely mean seeing faces, but seeing the familiar in the unfamiliar - but we usually use it for seeing faces.)  There are many websites that give cool examples of pareidolia.

The reason we see illusions is pretty well understood.  People try to make sense of the world in accordance with learned patterns, and if something does not fit those patterns, we try to make it fit.  For some cool examples and an explanation of how our minds trick us, click here.

Hallucinations are less well understood.  Hallucinations can happen at any time, but most frequently are caused by physiological conditions brought about by stress, exhaustion, drug intake, starvation, repeated rhythmic activity, etc.  Hallucinations are typically individual phenomena, although there are many interesting cases of shared hallucinations related to "mass hysteria."

Visual hallucinations are a fertile field for investigation, and scientists still debate what exactly is going on during hallucinations.  The famous neurologist Oliver Sacks dedicated much of his life to understanding hallucinations, and his work was still in progress when he died.

That being said, scientists are pretty sure that hallucinations are related to dreams.  The physiological impetus that leads to hallucination seems to allow the brain to dream while you are still awake.  In other words, you see the dream at the same time that you can interact with the ordinary world.  Most people who are hallucinating can tell which are the hallucinatory images and which are not, although in cases of severe schizophrenia or under the influence of certain drugs (such as the daturas), the person is unable to distinguish.  It is not unusual for people who recover from an episode of such hallucination to have little or no memory of it, much as people rarely have memories of their dreams.

In fact, it seems that forgetting dreams is necessary for maintaining a healthy sense of reality.  If people are consistently awoken during their dreams and made to write down what they dreamed, they can start to show symptoms of schizophrenia.  Schizophrenics often can remember their own dreams, and can't always tell whether their memories were of things that actually happened.

Finally, let me take a minute to implore you not to experiment with any of the daturas.  They are extraordinarily poisonous, and people who have used them "recreationally" report the experience as extremely unpleasant.

Flatulence

Why do farts stink?

I am answering this unsavory question mostly because the student who asked it asked me to answer it, and he was a student who demonstrated an admirable spirit of inquiry throughout the year.

People have a symbiotic relationship with the numerous bacteria that live in and on our bodies; in fact, there are more bacterial cells in our bodies than human cells. Many of these bacteria live in our intestines and help us to break down food. Several gasses are produced as byproducts of these processes, and farting (flatulence) is our main way of expelling them. It has long been debated which of these gasses is most responsible for the smell, but recent research has shown that it is caused by hydrogen sulfide and other volatile sulfur compounds. This is why eating high-sulfur foods such as cabbage or eggs can result in stinky flatulence.

There is also the question of why people find sulfur compounds so malodorous. Flatulence does not have much sulfur in it, but we are able to smell it quite easily. Most likely our sensitivity to volatile sulfur compounds is adaptive because it helps us to avoid eating rotten meat and eggs.

There are a couple of other interesting compounds that contribute a little to the smell of flatulence, but quite a bit to the smell of feces: skatole and indole, both of which are produced by the bacterial breakdown of tryptophan in the intestines. In small amounts these compounds have a flowery fragrance, and they are found in many flowers – most notably jasmine and orange blossoms. In larger amounts they smell like feces. They tend to be most prominent in the feces of carnivores, and that is one of the reasons carnivore feces generally smells worse than that of herbivores. (Another reason is that many carnivores have glands that add distinctive aromas to their feces to help them mark their territories.)

One of the most interesting characteristics of skatole and indole is that they make smells linger. This is why it is so hard to get the stink off of your shoes when you step on dog poop. It is also the reason that people who make perfume add these chemicals to their products. If you make your own fragrance from pure essential oils, it will be strong at first, then will quickly fade. Commercial perfumes can last all day, largely due to the addition of skatole and indole.

I once speculated that animals that harbor a large amount of skatole- and indole-producing bacteria do so because it is adaptive; it would make the territory-marking scent of their feces last longer. I cannot find any research to back up my hunch, and there seem to be some animals (Homo sapiens, for example) that produce large amounts of skatole but are not known to mark their territories this way (although we may have in our past).


Wednesday, June 22, 2016

Hisatsinom

What happened to the Anasazi?

This is an enduring question, and I become less certain the more I look into it.

The term “Anasazi” is derived from a Diné (Navajo) term that means “ancestor of my enemy.” Since this is an exonym (a term applied by an outside group) that is often considered derogatory by their modern descendants – the Pueblo Indians – it is now more common to refer to the Anasazi as “Ancestral Pueblo People” or “Ancestral Puebloans.” Of course, the term “Pueblo” originated with the Spanish, so I think it is best to use the Hopi term “Hisatsinom” – which means “ancient people” – although other Pueblos have their own terms.

The Hisatsinom lived in communities in the Four Corners region from around 1200 BCE to 1250 CE. Their architecture varied quite a bit, but the most distinctive characteristic of their towns and cities is the kiva, which likely served a ceremonial role similar to that served by the kiva among modern Pueblos. Various settlements had housing that included pit houses, elaborate pueblos, and cliff dwellings. They often used stone for their houses rather than the adobe that was common later, and many of their structures are well preserved.

What happened to the Hisatsinom is relatively easy to determine: they left their communities and settled in the nearby pueblos, which were already populated by related peoples. This explanation is satisfactory because it is consistent with traditional stories of the Puebloans and with the archaeological record.

The place where it gets sticky is when you try to find out why they did this. For decades, the story has been that Hisatsinom settlements were founded during a period in history with above-average rainfall. Since they relied on dry-land farming (farming without elaborate irrigation systems), their lives became increasingly difficult as the wet period came to an end in an event called the “Great Drought.” As they found it harder and harder to produce adequate food, the Hisatsinom slowly left their settlements and moved to the pueblos, which were close to major rivers and reliable seasonal streams.

There are a couple of problems with the Great Drought theory. At first, tree-ring data seemed to indicate an enormous drought that came without precedent. Further research indicates that the Hisatsinom had survived many droughts before, and it is not clear that they were experiencing widespread famine at this time. Also, the Hisatsinom had actually started to leave before the Great Drought, which indicates that something else might have been happening.

The departure from the Hisatsinom settlements was also accompanied by other changes. Stone habitations were closed up or dismantled. Pueblo tradition holds that their ancestors had found ways to manipulate the weather, and that this brought unexpected and disastrous consequences. They tried to reverse these changes by destroying their sacred buildings.

Some recent research finds evidence of warfare and cannibalism that began a few decades before the towns and cities were abandoned completely. Some people speculate that this was caused by the scarcity of food, but there are other possibilities. Christy and Jacqueline Turner propose the possibility that Meso-American invaders may have taken over the Hisatsinom territories. They required offerings of food that would be stored in Great Houses (large, centrally located structures that seem to have had ritual and storage uses). The Turners proposed that the ritually prepared human bones that were found at several sites could be victims that were offered as tribute to the invaders, who had established themselves as the rulers. In this view, the Hisatsinom left because they were escaping.

The Turners' theory has not been universally accepted by scientists, and most Pueblo leaders have rejected it. Anthropologists note that there are other reasons to ritually prepare bones. Most Pueblo elders say that the Hisatsinom were peaceful and had no tradition of cannibalism. That being said, I have a vague memory of a man who came from San Ildefonso Pueblo to tell us stories at a campfire program at Bandelier National Monument. He said that the horrors that took place when the Hisatsinom were practicing black magic before they dispersed were “unspeakable.” If I remember what he said correctly, it seems possible that there is some oral tradition that is consistent with cannibalism and warfare.

Among the descendants of the Hisatsinom there are many stories about why they migrated to the pueblos, but the best known is the one told by the Hopi. They say that the Hisatsinom left because they had a spiritual dedication to a life of movement. They started to experience bad luck caused by staying in places that were meant to be temporary, so they left to respect the practices of their ancestors.





Tuesday, June 21, 2016

Wheel of Fortune, Ficino



What is the medieval Wheel of Fortune? Who was Marsilio Ficino?

I shall attempt to answer both of these questions, unrelated as they are, in a single essay. For the record, the latter question was asked by a student who should have known who Ficino was.

While there are several important medieval figures who are little known today, the two whose reputations have diminished the most are Boethius (who wrote very early in the middle ages – the sixth century), and Ficino (who wrote late in the middle ages, and helped inspire the Renaissance). Both of these philosophers were quite famous in their times, but today few people could tell you who they are.

Boethius was an important member of the court of Theoderic the Great, the Ostrogothic ruler of Italy. Boethius was a Nicene (Trinitarian) Christian, while Theoderic was an Arian – a member of a branch of Christianity that believed in a Trinity in which the Father was superior to the Son. Even though the time in which Boethius lived was characterized by conflict between Nicene Christianity and Arianism, Boethius and Theoderic were very close until Boethius stood up for another member of Theoderic's court who was accused of treason for corresponding with the Byzantine Emperor Justin I. This resulted in Boethius himself being accused. He was imprisoned and eventually executed.

While he was in prison, he wrote The Consolation of Philosophy, which takes the form of a dialogue between Boethius and Lady Philosophy, the personification of wisdom. The Consolation starts with Boethius complaining of the injustice of his situation. Philosophy tells him that he should not despair, because nobody knows the will of God. As long as people are subject to fate, they will undergo injustice. It is useless to rail against the ways of the world; trust in God and live a life of virtue, and you will know true consolation.

In order to illustrate the unexpected whims of fate, Boethius reintroduced the ancient image of the Wheel of Fortune. Fortuna was a Roman goddess of fate, and she was shown turning a wheel with people moving up and down in a circle. As people move up on the wheel, they encounter unexpected luck and happiness, but after they reach the pinnacle they inevitably start to go down. Boethius makes it clear that all people – rich and poor, virtuous and wicked – are subject to these turns of fate. Since there is nothing a person can do to change fate, the correct attitude toward misfortunes and inequalities is tranquil acceptance. Even though the Wheel of Fortune was known before Boethius wrote about it, it seems that the widespread appearance of the wheel in medieval art is due to the influence of the Consolation of Philosophy.

Boethius' writing was well known in the middle ages, and his doctrine of stoic resignation was widely accepted. The striking inequalities and injustices of the era were largely accepted with placidity; peasant uprisings were quite rare until the Black Death led people to question the propriety of the status quo.

The philosophy of Marsilio Ficino was quite different from that of Boethius, although both drew heavily on Plato. During most of the middle ages, art and music were mostly commissioned by the Church, and their purpose was to communicate religious lessons. While the notion that medieval artists did not think art should be beautiful is an exaggeration, beauty was not the prime objective of art. Ficino revived the classical notion that beauty was the true objective of poetry, painting, music, and the other arts. While Ficino was a Christian who believed that the sublime experience of beauty would bring one closer to God, his writings renewed interest in the philosophies of pre-Christian thinkers – particularly Plato.

Ficino's influence is clearly seen in the art of the Renaissance. Beauty – particularly the beauty of the human figure – again became the object of art. Artists became the most well-known figures of the age, and churches, cities, and wealthy individuals competed to commission the most striking works of art.

The theme that ties these two philosophers together is that we can see the influence their philosophies had on their respective eras even though they, as individuals, are mostly forgotten.



Thursday, June 16, 2016

Comments

I changed the settings on this blog so that you do not need to sign in to leave comments.  Please keep it clean and be nice!

Tuesday, June 14, 2016

Sloths

What is your favorite type of sloth - two-fingered or three-fingered?

There are currently two families of sloths and six species.  While it is customary to designate the two families as "two-toed" and "three-toed," in fact all sloths have three toes on each foot.  The distinction is in their fingers - so it is good that the student who asked this question did not make that mistake.

People typically are most familiar with three-fingered sloths, and they love the sort of black mask they have on their faces.  Two-fingered sloths are less flashy in appearance.  They typically have brown fur that is lighter in color on their faces.  They have algae growing in their fur that they graze on from time to time.  The fur also provides a habitat for moths.  Recently scientists have discovered that the moths provide nutrients for the algae, which then is more nutritious when the sloths eat it.  In exchange, the moths get a nice, warm home and some protection from predators.  Sloths are fascinating animals.

I personally met with some two-fingered sloths at the Zoological Wildlife Center in Rainier, OR.  For the most part, they were pretty friendly, although I did spend quite some time getting a particular old grump to warm up to me.  My family had a better experience feeding and petting some more personable sloths.

I usually say that I prefer the two-fingered sloths.  Mostly, I like them because they look cool, they are less popular, they are friendlier (I think), and Mr. Sloth (my puppet) is a two-fingered sloth.

Recently, though, I have developed a soft spot for a particular species of three-fingered sloth: the pygmy sloth.  This is the most endangered of all the sloths.  It is endemic to a single island off the coast of Panama.  Pygmy sloths have coloring similar to that of other three-fingered sloths, but are much smaller; an adult is 19 to 21 inches long.  I like these sloths because they are small, cute, and people need to know about them so we can try to save them.

Africanized Bees

How do bees become Africanized?

There are many different varieties of honeybees (Apis mellifera), and any of them can hybridize if they are put together.  While there are native bees in the Americas, the honeybees that are kept for pollination and honey production are all imported.  The most common variety is the European honeybee.  These bees are pretty hardy, but they are best adapted to climates cooler than one finds in the tropics.  Their honey production and the stability of their hives are lower in the hotter areas of the Americas.

In the 1950s, a biologist named Warwick E. Kerr, tired of merely being a guy with a cool name, decided he would interbreed bees from Africa that were adapted to a hot climate with honey-producing European bees.  He was working in Brazil, and he figured that he could benefit from the combined traits of these two bees: the large capacity for creating honey of the European bees, and the heat tolerance of the African bees.

The bees that Kerr created were good at tolerating heat.  They were also more aggressive than European honeybees, more likely to swarm, and more persistent in pursuing invaders.  In fact, they can chase an animal a quarter of a mile.  This is pretty good stamina (although you can still outrun bees if you keep it up for long enough - click here if you don't believe me).  Kerr managed these hives as an experiment, and he did not intend for them to be set free in the wild.  He put a special type of screen called a "queen excluder" on the hives.  These are meant to allow the worker bees to enter and leave the hives but hold the queens and drones - who are larger - inside, which keeps the hives intact and prevents swarming.  Somebody removed the excluders, and 26 colonies swarmed and started to spread.

Since the bees were well suited to the tropical heat of Brazil, they thrived and kept spreading.  Due to their aggressiveness, people became afraid and started calling the Africanized honey bees "killer bees."  There was a lot of sensationalism about the possible spread of Africanized bees, including a popular movie in 1974 called Killer Bees.  It is still common to hear people use the term "killer bees," although most bee experts object to it.  Africanized bees are more aggressive than regular honey bees, but bees that do not feel threatened are not a danger to anybody.  There are definitely precautions you should take around bees - particularly near their hives - but you have probably seen hundreds of Africanized bees without incident.

Africanized bees did spread, and they are quite common in the southern United States.  Exactly how common they are is a matter of some debate.  A recent study in California showed that among managed colonies, 13% have DNA of African bees, while the percentage of non-managed bees who are at least partially Africanized is over 60%.  Thorough research has not been done on bees in Arizona, but most bee experts believe the percentage will prove to be higher here.

Africanized bees have likely reached their northernmost boundary by now.  They thrive in places with mild winter temperatures because they do not store enough food to make it through a long, cold winter.  In places like Arizona, though, it is likely that bees will increasingly show Africanized traits because they are so suited to hot climates.

Bees become increasingly Africanized in three different ways:

1.  Existing hives will swarm and form new colonies;
2.  Their drones will interbreed with European honeybee queens;
3.  and they will take over existing honeybee hives.

This last method is fascinating, and it was only recently discovered.  Africanized bees can go into an existing colony, kill its queen, and replace her with one of their own queens.

Each colony of bees has its own personality, and there are Africanized colonies that are relatively docile.  Since Africanized bees are so well adapted to our environment, maybe the best way to deal with their increase is to replace hostile queens with less aggressive ones.  The colony will quickly take on the personality of its new queen.





 

Friday, June 10, 2016

Favorite Color

What's your favorite color?

 There were a lot of questions like this.  Even though I dress in a drab manner, I love colors.  My favorites are blue, green, and purple.


Tutankhamun's Tomb

Do you have an update on the secret chambers in Tutankhamun's tomb?

Yes, I do.  A couple of students asked about this, and it is gratifying that they care about these historical mysteries.

It has long been known that many of the grave goods were not originally crafted for Tut, but were re-purposed goods made for someone else.  Nicholas Reeves, a highly-regarded Egyptologist from the University of Arizona, claims that most of the grave goods were originally for a mysterious female pharaoh known as "Neferneferuaten," who was probably Nefertiti.  The most famous artifact from the tomb, Tut's death mask, was clearly retrofitted by cutting the face off and replacing it with Tut's face.  Reeves found evidence that there was originally an inscription inside identifying Nefertiti as the owner; that inscription was scratched out and replaced with Tut's name.

Other people have pointed out that Tut's tomb itself seems to have originally been made for someone other than Tut.  They had to cut into the walls to make room for the sarcophagus, and the shape of the tomb is more typical of a queen's tomb than a king's.  Since we haven't located Nefertiti's tomb, some people speculate that Tut's tomb was originally meant for her, and that she may have been hastily buried somewhere else (or some other random thing happened to her mummy).

Reeves developed a theory that the tomb is actually Nefertiti's, and  that she is not buried elsewhere - but she is still inside that tomb!  He looked carefully at images of the walls and noticed evidence that there might be hidden chambers behind a wall in the same room where Tut's sarcophagus was.  He started to believe that the ancient Egyptians took an existing tomb and built walls to retrofit it into a new tomb for Tut.

Evidence to support Reeves' theory mounted earlier this year when Japanese radar expert Hirokatsu Watanabe used ground-penetrating radar to show that there may be hidden chambers.  Watanabe said the images show the presence of metals and organic material within the chambers, which obviously would be consistent with a hidden tomb.

Some Egyptian tourism officials were excited about these findings, since visits to Egypt have dropped off as a result of political turmoil.  Perhaps a major discovery would be able to bring people back.

Other experts were not convinced.  Former Egypt Antiquities Minister Zahi Hawass was skeptical of the claims, and other radar experts interpreted Watanabe's scans as inconclusive.  In March and April of this year there was a large, well-coordinated effort to duplicate Watanabe's results.

Egyptian officials stalled on issuing the results and have been reluctant to publicize them, mainly because the more detailed scans indicate that there are no hidden chambers.

As for me, I still hope they find something, but I doubt it.  I am especially disappointed by what further scrutiny of Watanabe's scans has revealed.  He enthusiastically claimed that there almost certainly were hidden chambers, although most experts say that his scans don't show any such evidence.  Watanabe has an eccentric way of interpreting results that he refuses to explain to anybody.  It seems he may have been using this as a means of self-promotion rather than as a serious attempt to further scientific inquiry.

Thursday, June 9, 2016

21

Why is the drinking age 21?  Where did the 21 come from?  It seems like an arbitrary age.

In 1984, Ronald Reagan signed the National Minimum Drinking Age Act into law.  This law told the states that if they did not raise the drinking age to 21 they would lose 10% of their federal highway funding.  Eventually all the states fell into line, and this led to a reduction in the number of drunken-driving deaths, so it seems the law had a good effect.  I know you may be wondering how a person who found this sneaky way of forcing states to align their laws with federal policy - even though the 21st Amendment leaves the power to regulate alcohol to the states - came to be considered a hero of limiting federal control over the states, but I don't want to argue about it.

The question here is how 21 came to be seen as the age of majority in this case.  Why not 20 or 22?  The absolute origin of 21 as a magic age is lost to antiquity, but it clearly came to our country through English common law, which was once the law of the land here; our legal heritage runs back to the original 13 colonies, which were all bound by British law.

You may ask, what is English common law?  A quick answer to this is that the British do not have a constitution like ours: a single written document that you can go and read.  Their law consists of statutes that have been passed by kings and parliaments throughout history (known as "statute law"), and various legal judgments and conventions (known as "common law").

English common law gave people increasing responsibility as they aged.  It took into account the fact that girls mature more quickly than boys, so boys and girls did not always gain rights at the same age.  Boys began to gain rights at the age of 14, while girls began at 12.  For example, a male could sign a contract at 14 but, upon reaching 21, could choose not to ratify it - even though an adult who entered a contract with a minor was bound to follow it.  This raises the question of why anybody would enter a contract with a child, since the child could so easily abrogate it, but that is the law.  A man could marry without parental consent at 21.  A rundown of various rights by age can be found here; I can't vouch for its accuracy, but it is pretty interesting.

Outside of England and its former colonies, the age of 21 doesn't have the magic power it has here.  There are a handful of countries that have a minimum drinking age of 21.  All of them are either former British colonies, Islamic countries that have additional restrictions on who can purchase alcohol, or Pacific islands that have a history of close ties with the United States.  Taking a worldwide perspective, a drinking age of 21 is very rare; 18 is the most common, although some places use 19 or 20.  Because of our success in reducing accidents by raising the drinking age, there have been attempts in places like New Zealand, Australia, and the Philippines to increase the drinking age, but we are still relative outliers.

An interesting sidenote is that the original Selective Service Act of 1917 required men to register at 21.  The age was lowered to 18 in 1918.  Since some people object to men being required to register at 18 even though they are not allowed to drink until 21, maybe Selective Service should be changed back.





Tuesday, June 7, 2016

Allergies

How did ancient people fight off the first allergies?

This is a good question, and there is not much information.  In fact, there is not much mention of hay-fever-type allergies in ancient sources, although ancient people were aware of allergic reactions to animals and foods.  According to Psychoneuroimmunology by Robert Ader, the earliest reported allergic reaction was that of King Menes of Egypt, who died of a wasp sting some time between 3640 and 3300 BCE.  Ader also reports that Britannicus, the son of the emperor Claudius, was so allergic to horses that his eyes would swell shut and he couldn't see where he was going.  In Britannicus' case, the remedy was to avoid horses as much as possible.

Hay fever - an allergic reaction to dust, mold, and pollen - was not formally described until 1906.  This does not mean such allergies did not exist (although see below), just that they were not recognized as a specific phenomenon.  The ancients did know about breathing problems (asthma), and there are numerous treatments mentioned in ancient sources.  These fall into two categories - inhaling steam that has been scented with herbs, and taking tea that includes stimulants known to cause bronchodilation (particularly ephedra in China).  Both of these methods are somewhat effective.

For a while, I lived in a place called Johnson Mesa, New Mexico.  I never figured out what it was, but I was extraordinarily allergic to something there.  I read in a book that a common folk remedy for allergies was to take the resinous sap of a plant and paint it inside your nose.  Supposedly this remedy went all the way back to the Plains Indians.  The theory is that the sap makes pollen stick before you can inhale it.  It did not work so well.

While we find that there are symptoms described in ancient sources that are similar to allergies, it definitely was not very important to them.  In fact, research shows that allergies are much more common among industrialized nations, and that they are becoming more and more common as people become increasingly urbanized.  The cause for this seems to be related to the reduced incidence of parasites.  Our immune system evolved to fight off parasites, and if you do not have parasites it will turn against things like pollens and molds.  Allergies are essentially an over-reaction to these relatively harmless irritants.  People can reduce their allergies by infecting themselves with parasites.  If you are interested in ordering some hookworms, click here.   There is an interesting article about parasites and allergies from Smithsonian Magazine that you can read here.