Beasts of Ephesus

Learn. Something.

Archive for the ‘Culture’ Category

Are People Who Believe in a “Higher Power” Happier?

Posted by jase on August 23, 2009

While studying an unrelated topic, researchers accidentally discovered that people with religious beliefs tend to be more content in life. Though it was not the original objective, the recent European study found that religious people are better able to cope with shocks such as losing a loved one or being laid off from a job.

Professor Andrew Clark, from the Paris School of Economics, and co-author Dr Orsolya Lelkes, from the European Centre for Social Welfare Policy and Research, analyzed a variety of factors among Catholic and Protestant Christians and found that life satisfaction seems to be higher among the religious population. The authors concluded that religion in general might act as a “buffer” that protects people from life’s disappointments.

“We originally started the research to work out why some European countries had more generous unemployment benefits than others, but our analysis suggested that religious people suffered less psychological harm from unemployment than the non-religious,” noted Professor Clark. “They had higher levels of life satisfaction”. Data from thousands of European households revealed higher levels of “life satisfaction” in believers.

Professor Clark suspects that a variety of aspects are at play, and that perhaps a “religious upbringing” could be responsible for the effect, rather than any particular religious beliefs. The researchers say they found that the religious crowd tended to experience more “current day rewards”, rather than storing them up for the future. Previous studies have also found strong correlations between religion and happiness.

The idea that religion may offer substantial psychological benefits in life is in sharp contrast with another common viewpoint: that religion is repressive and has a negative influence on human development. Professor Leslie Francis, from the University of Warwick, believes that the benefit might involve the increased “purpose of life” experienced by many believers that may not be as strongly felt among nonbelievers.

“These findings are consistent with other studies which suggest that religion does have a positive effect, although there are other views which say that religion can lead to self-doubt, and failure, and thereby have a negative effect,” said Francis. “The belief that religion damages people is still in the minds of many.”

Terry Sanderson, a leading UK secularist, gay rights activist and president of the National Secular Society, said that any study describing a link between happiness and religion is “meaningless”. “Non-believers can’t just turn on a faith in order to be happy. If you find religious claims incredible, then you won’t believe them, whatever the supposed rewards in terms of personal fulfillment,” he said. “Happiness is an elusive concept, anyway – I find listening to classical music blissful and watching football repulsive. Other people feel exactly the opposite. In the end, it comes down to the individual and, to an extent, their genetic predispositions.”

While no one would argue that genetics don’t influence one’s disposition, Justin Thacker, head of Theology for the Evangelical Alliance, says that there are definitely other factors worth considering. He says a belief in God increases one’s feeling that life is meaningful. “There is more than one reason for this – part of it will be the sense of community and the relationships fostered, but that doesn’t account for all of it. A large part of it is due to the meaning, purpose and value which believing in God gives you, whereas not believing in God can leave you without those things.”

Previous studies have concluded that humans are biologically predisposed to believe in God. Historically, most cultures have developed some sort of religious belief that included at least some form of a “higher power”. From an evolutionary and psychological perspective, these questions have intrigued scientists for decades, but the physiological and cognitive study of religion is still relatively young. Both believers and non-believers can agree on the scientific findings and still interpret them quite differently, note researchers at the Ian Ramsey Centre for Science and Religion at the University of Oxford, who are currently working on a project to better understand the cognitive science of religion.

“One element of the current project is to develop philosophical and theological treatments of what the findings from cognitive science of religion mean for various theological positions,” states the Cognition, Religion and Theology Project outline. “One element of the project is scientifically explaining not just belief in gods but why some people become atheists.

If scientists can explain why people tend to believe in gods and also why other people tend to believe there are no gods, then surely the presence of a scientific explanation cannot mean that you should not believe one way or the other just on the presence or possibility of such an explanation. Non-believers might find satisfaction in a sound scientific explanation of why people tend to believe in God because they can now account for why people persist in believing in a fictitious being. The believer might find satisfaction in the scientific documentation of how human nature predisposes people to believe in God because it could reinforce the idea that people were divinely designed to know and believe in God. Both believers and non-believers can agree on the scientific findings.”

Posted by Rebecca Sato.

Source: Daily Galaxy


Posted in Culture, Psychology, Religion

Get to Know: Outsider Art

Posted by jase on July 27, 2009

The term outsider art was coined by art critic Roger Cardinal in 1972 as an English synonym for art brut (French: [aʁ bʁyt], “raw art” or “rough art”), a label created by French artist Jean Dubuffet to describe art created outside the boundaries of official culture; Dubuffet focused particularly on art by insane-asylum inmates.

While Dubuffet’s term is quite specific, the English term “outsider art” is often applied more broadly, to include certain self-taught or Naïve art makers who were never institutionalized. Typically, those labeled as outsider artists have little or no contact with the mainstream art world or art institutions. In many cases, their work is discovered only after their deaths. Often, outsider art illustrates extreme mental states, unconventional ideas, or elaborate fantasy worlds.

Outsider art has emerged as a successful art marketing category (an annual Outsider Art Fair has taken place in New York since 1992). The term is sometimes misapplied as a catch-all marketing label for art created by people outside the mainstream “art world,” regardless of their circumstances or the content of their work.

In 1991, the first and only organization dedicated to the study, exhibition and promotion of outsider art was formed in Chicago: Intuit: The Center for Intuitive and Outsider Art. Chicago is often recognized for its concentration of self-taught and outsider artists, among them Henry Darger, Joseph Yoakum, Lee Godie, William Dawson, David Philpot, and Wesley Willis. Intuit maintains a non-profit museum, open to the public, which features exhibitions of art by intuitive, outsider, and self-taught artists.

Jean Dubuffet and art brut

French artist Jean Dubuffet was particularly struck by Bildnerei der Geisteskranken (Hans Prinzhorn’s 1922 study of art by psychiatric patients) and began his own collection of such art, which he called art brut, or raw art. In 1948 he formed the Compagnie de l’Art Brut along with other artists, including André Breton. The collection he established became known as the Collection de l’Art Brut. It contains thousands of works and is now permanently housed in Lausanne, Switzerland.

Dubuffet characterized art brut as:

“Those works created from solitude and from pure and authentic creative impulses – where the worries of competition, acclaim and social promotion do not interfere – are, because of these very facts, more precious than the productions of professionals. After a certain familiarity with these flourishings of an exalted feverishness, lived so fully and so intensely by their authors, we cannot avoid the feeling that in relation to these works, cultural art in its entirety appears to be the game of a futile society, a fallacious parade.” – Jean Dubuffet, Place à l’incivisme (Make Way for Incivism), Art and Text no. 27 (December 1987 – February 1988), p. 36

Dubuffet’s writing on art brut was the subject of a noted program at the Art Club of Chicago in the early 1950s.

Dubuffet argued that ‘culture’, that is, mainstream culture, managed to assimilate every new development in art, and by doing so took away whatever power it might have had. The result was to asphyxiate genuine expression. Art brut was his solution to this problem – only art brut was immune to the influences of culture, immune to being absorbed and assimilated, because the artists themselves were not willing or able to be assimilated.

Cultural Context

The interest in “outsider” practices among twentieth-century artists and critics can be seen as part of a larger emphasis on the rejection of established values within the modernist art milieu. The early part of the 20th century gave rise to cubism and the Dada, Constructivist and Futurist movements in art, all of which involved a dramatic movement away from cultural forms of the past. Dadaist Marcel Duchamp, for example, abandoned “painterly” technique to allow chance operations a role in determining the form of his works, or simply to re-contextualize existing “readymade” objects as art. Mid-century artists, including Pablo Picasso, looked “outside” the traditions of high culture for inspiration, drawing from the artifacts of “primitive” societies, the unschooled artwork of children, and vulgar advertising graphics. Dubuffet’s championing of art brut — the work of the insane and others at the margins of society — is yet another example of avant-garde art challenging established cultural values.

Terminology

A number of terms are used to describe art that is loosely understood as “outside” of official culture. Definitions of these terms vary, and there are areas of overlap between them. The editors of Raw Vision, a leading journal in the field, suggest that “Whatever views we have about the value of controversy itself, it is important to sustain creative discussion by way of an agreed vocabulary”. Consequently they lament the use of “outsider artist” to refer to almost any untrained artist. “It is not enough to be untrained, clumsy or naïve. Outsider Art is virtually synonymous with Art Brut in both spirit and meaning, to that rarity of art produced by those who do not know its name.”

  • Art Brut: literally translated from French means “raw art”; ‘Raw’ in that it has not been through the ‘cooking’ process: the art world of art schools, galleries, museums. Originally art by psychotic individuals who existed almost completely outside culture and society. Strictly speaking it refers only to the Collection de l’Art Brut.
  • Folk art: Folk art originally suggested crafts and decorative skills associated with peasant communities in Europe – though presumably it could equally apply to any indigenous culture. It has broadened to include any product of practical craftsmanship and decorative skill – everything from chain-saw animals to hub-cap buildings. A key distinction between folk and outsider art is that folk art typically embodies traditional forms and social values, where outsider art stands in some marginal relationship to society’s mainstream.
  • Intuitive art / Visionary art: Raw Vision Magazine’s preferred general terms for outsider art. It describes them as deliberate umbrella terms. However, visionary art, unlike the other terms defined here, can often refer to the subject matter of the works, which includes images of a spiritual or religious nature. Intuitive art is probably the most general term available. Intuit: The Center for Intuitive and Outsider Art, based in Chicago, operates a museum dedicated to the study and exhibition of intuitive and outsider art. The American Visionary Art Museum in Baltimore, Maryland is dedicated to the collection and display of visionary art.
  • Marginal art/Art singulier: Essentially the same as Neuve Invention; refers to artists on the margins of the art world.
  • Naïve art: Another term commonly applied to untrained artists who aspire to “normal” artistic status, i.e. they have a much more conscious interaction with the mainstream art world than do outsider artists.
  • Neuve Invention: Used to describe artists who, although marginal, have some interaction with mainstream culture. They may be doing art part-time for instance. The expression was coined by Dubuffet too; strictly speaking it refers only to a special part of the Collection de l’Art Brut.
  • Visionary environments: Buildings and sculpture parks built by visionary artists, ranging from decorated houses to large areas incorporating many individual sculptures with a tightly associated theme. Examples include Watts Towers by Simon Rodia, Buddha Park and Sala Keoku by Bunleua Sulilat, and the Palais Ideal by Ferdinand Cheval.

Notable outsider artists

  • Nek Chand (b. 1924) is an Indian artist, famous for building the Rock Garden of Chandigarh, a forty acre (160,000 m2) sculpture garden in the city of Chandigarh, India.
  • Ferdinand Cheval (1836–1924) was a country postman in Hauterives, south of Lyon, France. Motivated by a dream, he spent 33 years constructing the Palais Ideal. Half organic building, half massive sculpture, it was constructed from stones collected on his postal round, held together with chicken wire, cement, and lime.
  • Henry Darger (1892–1973) was a solitary man who was orphaned and institutionalised as a child. In the privacy of his Chicago apartment, he produced 15,000 pages of text and hundreds of large-scale illustrations, including maps, collaged photos and watercolors that depict his child heroes “the Vivian Girls” in the midst of battle scenes that combine imagery of the US Civil War with fanciful monsters.
  • Francis E. Dec (1926–1996) was a U.S. lawyer disbarred in 1961 after what he claimed was a conspiracy and who spent the next thirty years of his life in isolation mailing increasingly paranoid rants to the media. His outlandish worldview and unique writing style made his rants become cult items circulated as involuntary humour and underground poetry.
  • Howard Finster (1916–2001), a self-taught artist, was a preacher from Summerville, Georgia who claimed to be inspired by God to spread the gospel through the built environment of Paradise Garden, his masterpiece, and over 46,000 pieces of art.
  • Madge Gill (1882–1961), was an English mediumistic artist who made thousands of drawings “guided” by a spirit she called “Myrninerest” (my inner rest).
  • Paul Goesch (1885–1940) was a schizophrenic German artist and architect, murdered by the Nazis in their euthanasia campaign.
  • Alexander Lobanov (1924–2003) was a deaf and autistically withdrawn Russian known for detailed and self-aggrandizing self-portraits: paintings, photographs and quilts, which usually include images of large guns.
  • Helen Martins (1897–1976) transformed the house she inherited from her parents in Nieu-Bethesda, South Africa, into a fantastical environment decorated with crushed glass and cement sculptures. The house is known as The Owl House.
  • Tarcisio Merati (1934–1995), an Italian artist, was confined to a psychiatric hospital for most of his adult life, during which time he produced a vast number of drawings (several dream toys, a bird on a nest, etc.), texts and musical compositions.
  • Martín Ramírez (1895–1963) was a Mexican outsider artist who spent most of his adult life institutionalized in a California mental hospital (he had been diagnosed as paranoid schizophrenic). He developed an elaborate iconography featuring repeating shapes mixed with images of trains and Mexican folk figures.
  • Achilles Rizzoli (1896–1981) was employed as an architectural draftsman. He lived with his mother near San Francisco, California. After his death, a huge collection of elaborate drawings was discovered, many in the form of maps and architectural renderings that described a highly personal fantasy exposition, including portraits of his mother as a neo-baroque building.
  • Judith Scott (1943–2005) was born deaf and with Down Syndrome. After taking a fiber art class at an art institute for the disabled, she began to produce objects wrapped in many layers of string and fibers.
  • Bunleua Sulilat (1932–1996) was a Thai/Lao myth-maker and informal religious leader who organized large groups of unskilled volunteers for the construction of two religious-themed parks featuring giant fantastic concrete sculptures.
  • Miroslav Tichý (b. 1926) wandered the small Moravian town of Kyjov in rags, pursuing his obsession with the female form by secretly photographing women in the streets, shops and parks with cameras he made from tin cans, children’s spectacle lenses and other junk he found on the street. He would return home each day to make prints on equally primitive equipment, making only one print from the negatives he selected. His work remained largely unknown until 2005, when he was 79 years old.
  • Adolf Wölfli (1864–1930), a Swiss artist, was confined to a psychiatric hospital for most of his adult life, during which time he produced a vast number of drawings, texts and musical compositions. Wölfli was the first well-known “outsider artist,” and he remains closely associated with the label.
  • Kiyoshi Yamashita (1922–1971) was a Japanese graphic artist who spent much of his life wandering as a vagabond through Japan. He has been considered an autistic savant.
  • Scotti Wilson (1928–1972), born Louis Freeman, emigrated from Scotland to the United States and opened a second-hand clothes store; he found fame when his casual doodlings were noted for their dream-like character.
  • Bill Traylor (1854–1949). Better characterized as “self-taught” than “outsider,” Traylor was born into slavery in Alabama. Unable to read or write, he first began drawing in 1939 at the age of eighty-three. He worked full-time for the next four years to produce over eighteen hundred drawings. He used a straight edge to create geometric silhouettes of human and animal figures which he then filled in with crayon and tempera. He is known for his intriguing use of pattern versus flat color and a remarkably intuitive sense of space.

Posted in Art, Artists, Culture, Society

Profile: Mexico City

Posted by jase on July 26, 2009

Mexico City (Spanish: Ciudad de México; also México, D.F., for Distrito Federal) is the capital city of Mexico. It is the economic, industrial, and cultural center of the country, and its most populous city, with about 8,836,045 inhabitants in 2008. Greater Mexico City (Zona Metropolitana del Valle de México) incorporates 59 adjacent municipalities of Mexico State and 29 municipalities of the state of Hidalgo, according to the most recent definition agreed upon by the federal and state governments. Greater Mexico City has a population exceeding 19 million people, making it the second largest metropolitan area in the western hemisphere and the third largest in the world by population according to the United Nations. In 2005, it ranked eighth in terms of GDP (PPP) among urban agglomerations in the world. Mexico City is a major global city in Latin America and ranked 25th among global cities by Foreign Policy’s 2008 Global Cities Index.

Mexico City is also the Federal District (Distrito Federal). The Federal District is coterminous with Mexico City; both are governed by a single institution and are constitutionally considered to be the same entity. This has not always been the case. The Federal District, created in 1824, was composed of several municipalities, one of which was the municipality of Mexico City. As the city grew, it engulfed all the other municipalities into one large urban area. In 1928, all municipalities within the Federal District were abolished, an action that left a vacuum in the legal status of Mexico City vis-à-vis the Federal District, even though for most practical purposes they were traditionally considered to be the same entity. In 1993, to end the debate over whether one entity had absorbed the other, or whether either existed apart from the other, the 44th Article of the Constitution of Mexico was reformed to state clearly that Mexico City is the Federal District, seat of the Powers of the Union and capital of the United Mexican States.

According to a study conducted by PricewaterhouseCoopers, Greater Mexico City, with a population of 19.2 million, had a GDP of $315 billion in 2005 at purchasing power parity, an urban agglomeration with the eighth highest GDP in the world after the greater areas of Tokyo, New York, Los Angeles, Chicago, Paris, London and Osaka/Kobe, and the highest in Latin America. In 2020, it is expected to rank seventh with a $608 billion GDP, displacing Osaka/Kobe.

As of 2008, the city had a GDP of about $221 billion, with an income per capita of $25,258, well above the national average and on par with high income economies such as South Korea or the Czech Republic.

Mexico City is located in the Valley of Mexico, also called the Valley of Anáhuac, a large valley in the high plateaus at the center of Mexico, at an altitude of 2,240 meters (7,349 ft). The city was originally built as Tenochtitlan by the Aztecs in 1325 on an island of Lake Texcoco. It was almost completely destroyed in the siege of 1521, and was subsequently redesigned and rebuilt in accordance with Spanish urban standards. In 1524, the municipality of Mexico City was established, known as México Tenochtitlán, and from 1585 it was officially known as Ciudad de México.

History

Spanish conquest of Tenochtitlán

After landing in Veracruz, Hernán Cortés heard about the great city and the long-standing rivalries and grievances against it. Although Cortés came to Mexico with a very small army, he was able to persuade many of the other native peoples to help him destroy Tenochtitlán.

Cortés first saw Tenochtitlán on 8 November 1519. Upon viewing it for the first time, Cortés and his men were stunned by its beauty and size. The Spaniards marched along the causeway leading into the city from Iztapalapa. Although Montezuma came out from the center of Tenochtitlán to greet them and exchange gifts, the camaraderie did not last long. Cortés put Montezuma under house arrest, hoping to rule through him. Tensions increased until, on the night of June 30, 1520 – during a struggle commonly known as “La Noche Triste” – the Aztecs revolted against the Spanish intrusion and managed to capture or drive out the Europeans and their Tlaxcalan allies. Cortés regrouped at Tlaxcala. The Aztecs thought the Spaniards were permanently gone. They elected a new king, Cuauhtémoc. Cortés decided to lay siege to Tenochtitlán in May 1521. For three months, the city suffered from the lack of food and water as well as the spread of smallpox brought by the Europeans. Cortés and his allies landed their forces in the south of the island and fought their way through the city, street by street, and house by house. Finally, Cuauhtémoc had to surrender in August 1521.

The Spaniards practically razed Tenochtitlán. Cortés first settled in Coyoacán, but decided to rebuild the Aztec site in order to erase all traces of the old order. Cortés did not establish an independent, conquered territory under his own personal rule, but remained loyal to the Spanish crown. The first viceroy of the new domain arrived in Mexico City fourteen years later. By that time, the city had again become a city-state, with power that extended far beyond its established borders. Although the Spanish preserved Tenochtitlán’s basic layout, they built Catholic churches over the old Aztec temples and claimed the imperial palaces for themselves. Tenochtitlán was renamed “México”, the city’s alternative name, as the Spanish found this easier to pronounce.

20th Century and Beyond

The history of the rest of the 20th century to the present focuses on the phenomenal growth of the city and its environmental and political consequences. In 1900, the population of Mexico City was about 500,000. The city began to grow rapidly westward in the early part of the 20th century, and then began to grow upwards in the 1950s, with the Torre Latinoamericana as its first skyscraper. The 1968 Olympic Games brought about the construction of large sporting facilities. In 1969, the Metro system was inaugurated. Explosive growth in the population of the city started in the 1960s, with the population overflowing the boundaries of the Federal District into the neighboring state of Mexico, especially to the north, northwest and northeast. Between 1960 and 1980 the city’s population more than doubled to 8,831,079, and by 1980 half of all the industrial jobs in Mexico were located in Mexico City.

Under relentless growth, the Mexico City government could barely keep up with services. Villagers from the countryside who continued to pour into the city to escape poverty only compounded the city’s problems. With no housing available, they took over lands surrounding the city, creating huge shantytowns that extended for many miles. This caused serious air and water pollution problems, as well as a sinking city due to overextraction of groundwater. Air and water pollution have since been contained and improved in several areas thanks to government programs, the renovation of vehicles and the modernization of public transport.

The autocratic government that had ruled Mexico City since the Revolution was tolerated, mostly because of the continued economic expansion since World War II. This was the case even though the government could not handle the population and pollution problems adequately. Nevertheless, discontent and protests began in the 1960s, leading to the massacre of an unknown number of protesting students in Tlatelolco in 1968.

However, the last straw may have been the 1985 Mexico City earthquake. On Thursday, 19 September 1985, at 7:19 am local time, Mexico City was struck by an earthquake of magnitude 8.1 on the Richter scale. While this earthquake was not as deadly or destructive as many similar events in Asia and other parts of Latin America, it proved to be a political disaster for the one-party government. The government was paralyzed by its own bureaucracy and corruption, forcing ordinary citizens not only to create and direct their own rescue efforts but also to reconstruct much of the housing that was lost. This discontent eventually led to Cuauhtémoc Cárdenas, a member of the Party of the Democratic Revolution, becoming the first elected mayor of Mexico City in 1997. Cárdenas promised a more democratic government, and his party claimed some victories against crime, pollution, and other major problems. He resigned in 1999 to run for the presidency.

Geography & Climate

Mexico City is located in the Valley of Mexico, sometimes called the Basin of Mexico. This valley lies in the Trans-Mexican Volcanic Belt, in the high plateaus of central Mexico.

Mexico City has a temperate highland climate (Köppen Cwb), owing to its tropical latitude and high elevation. The lower region of the valley receives less rainfall than the upper regions of the south; the lower boroughs of Iztapalapa, Iztacalco, Venustiano Carranza and the west portion of Gustavo A. Madero are usually drier and warmer than the upper southern boroughs of Tlalpan and Milpa Alta, a mountainous region of pine and oak trees known as the range of Ajusco.

The average annual temperature varies from 12 to 16°C (54 to 61°F), depending on the altitude of the borough. The lowest temperatures, usually registered during January and February, may reach −2 to −5°C (28 to 23°F), usually accompanied by snow showers in the southern regions of Ajusco, and the maximum temperatures of late spring and summer may reach up to 32°C (90°F). Overall precipitation is heavily concentrated in the summer months, and includes dense hail. The central valley of Mexico rarely gets precipitation in the form of snow during winter; the last two recorded instances of such an event were on March 5, 1940 and January 12, 1967.

The region of the Valley of Mexico receives anti-cyclonic systems, whose weak winds do not allow the air pollutants produced by the 50,000 industries and 4 million vehicles operating in or around the metropolitan area to disperse outside the basin.

The area receives about 700 millimeters of annual rainfall, which is concentrated from June through September/October with little or no precipitation the remainder of the year. The area has two main seasons. The rainy season runs from June to October when winds bring in tropical moisture from the sea. The dry season runs from November to May, when the air is relatively drier. This dry season subdivides into a cold period from November to February when polar air masses pushing down from the north keep the air fairly dry and a warm period from March to May when tropical winds again dominate but they do not yet carry enough moisture for rain.

Demographics

Historically, and since pre-Hispanic times, the valley of Anáhuac has been one of the most densely populated areas in Mexico. When the Federal District was created in 1824, the urban area of Mexico City extended approximately to the area of today’s Cuauhtémoc borough. At the beginning of the twentieth century, the elites began migrating to the south and west, and soon the small towns of Mixcoac and San Ángel were incorporated by the growing conurbation. According to the 1921 census, 30.79% of the population was White, 54.78% was Mestizo, 11.74% was Indigenous and 2.69% belonged to other groups (mostly Mulattoes, Blacks and some Cantonese Chinese immigrants). Today the city can be clearly divided into a middle- and upper-class area (south and west, including Polanco, Chapultepec and Santa Fe) and a lower-class area to the east (Ciudad Nezahualcóyotl, Pantitlán, Chalco and Moctezuma).

Up to the 1980s, the Federal District was the most populated federal entity in Mexico, but since then its population has remained stable at around 8.7 million. The growth of the city has extended beyond the limits of the Federal District to 59 municipalities of the state of Mexico and 1 in the state of Hidalgo. With a population of approximately 19.8 million inhabitants (2008), it is one of the most populated conurbations in the world. Nonetheless, the annual rate of growth of the Metropolitan Area of Mexico City is much lower than that of other large urban agglomerations in Mexico, a phenomenon most likely attributable to the environmental policy of decentralization. The net migration rate of the Federal District from 1995 to 2000 was negative.

While they represent around 1.3% of the city’s population, indigenous peoples from different regions of Mexico have immigrated to the capital in search of better economic opportunities. Náhuatl, Otomí, Mixteco, Zapoteco, and Mazahua are the indigenous languages with the greatest number of speakers in Mexico City.

On the other hand, Mexico City is home to large communities of expatriates, most notably from South America (mainly from Argentina, but also from Chile, Uruguay, Colombia, Brazil and Venezuela), from Europe (mainly from Spain and Germany, but also from France, Italy, Turkey, Poland and Romania), from the Middle East (mainly from Lebanon and Syria), and recently from Asia (mainly from China and South Korea). While no official figures have been reported, population estimates for each of these communities are quite significant. Mexico City is home to the largest population of U.S. Americans living outside the United States. Some estimates put the number as high as 600,000, while in 1999 the U.S. Bureau of Consular Affairs estimated that over 440,000 Americans lived in the Mexico City metropolitan area.

The majority (90.5%) of the residents of Mexico City are Roman Catholic, a proportion higher than the national percentage, though it has been decreasing over recent decades. Many other religions and philosophies are also practiced in the city: many different types of Protestant groups, different types of Jewish communities, Buddhist and other philosophical groups, as well as atheism.

  • 1950 – 3 million people lived in Mexico City.
  • 1975 – 12 million people lived in Mexico City.
  • 2000 – 22 million people lived in Mexico City.

Nicknames

Mexico City was traditionally known as La Ciudad de los Palacios (“the City of the Palaces”), a nickname attributed to Baron Alexander von Humboldt, who, visiting the city in the 19th century, wrote in a letter back to Europe that Mexico City could rival any major city in Europe.

During López Obrador’s administration a political slogan was introduced: la Ciudad de la Esperanza (“The City of Hope”). This slogan was quickly adopted as a nickname to the city under López Obrador’s term, although it has lost popularity since the new slogan Capital en Movimiento (“Capital in Movement”) was adopted by the recently elected administration headed by Marcelo Ebrard Casaubon; the latter is not treated as a nickname in media.

The city is colloquially known as Chilangolandia after the locals’ nickname chilangos, which is used either as a pejorative term by people living outside Mexico City or as a proud adjective by Mexico City’s dwellers.

Residents of Mexico City are more formally called capitalinos (in reference to the city being the capital of the country) or, more recently, defeños (a word derived from the postal abbreviation of the Federal District in Spanish, D.F., which is read “De-Efe”).

Posted in Countries, Culture, Globalization, Mexico, Profiles, Society, Urbanism

7 of History’s Most Infamous Curses

Posted by jase on July 25, 2009

1. James Dean and “Little Bastard”

On September 30, 1955, James Dean was killed when the silver Porsche 550 Spyder he called “Little Bastard” was struck by an oncoming vehicle. Within a year or so of Dean’s crash, the car was involved in two more fatal accidents and caused injury to at least six other people. After the accident, the car was purchased by hot-rod designer George Barris.

While getting a tune-up, Little Bastard fell on the mechanic’s legs and crushed them. Barris later sold the engine and transmission to two doctors who raced cars. While they were racing against each other, one driver was killed and the other seriously injured. Someone else had purchased the tires, which blew simultaneously, sending the driver to the hospital.

Little Bastard was set to appear in a car show, but a fire broke out in the building the night before the show, destroying every car except Little Bastard, which survived without so much as a smudge. The car was then loaded onto a truck to go back to Salinas, California. The driver lost control en route, was thrown from the cab, and was crushed by the car when it fell off the trailer. In 1960, after being exhibited by the California Highway Patrol, Little Bastard disappeared and hasn’t been seen since.

2. The Curse of Tutankhamen’s Tomb

In 1922, English explorer Howard Carter, leading an expedition funded by George Herbert, Fifth Earl of Carnarvon, discovered the ancient Egyptian king’s tomb and the riches inside. After opening the tomb, however, strange and unpleasant events began to take place in the lives of those involved in the expedition.

Lord Carnarvon’s story is the most bizarre. The adventurer apparently died from pneumonia and blood poisoning following complications from a mosquito bite. Allegedly, at the exact moment Carnarvon passed away in Cairo, all the lights in the city mysteriously went out. Carnarvon’s dog dropped dead that morning, too. Some point to the foreboding inscription, “Death comes on wings to he who enters the tomb of a pharaoh” as proof that King Tut put a curse on anyone who disturbed his final resting place.

3. “The Club”

If you’re a rock star and you’re about to turn 27, you might want to consider taking a year off to avoid membership in “The Club.” Robert Johnson, an African-American musician whom Eric Clapton called “the most important blues musician who ever lived,” played the guitar so well that some said he must have made a deal with the devil. So when he died at 27, folks said it must have been time to pay up.

Since Johnson, a host of musical geniuses have gone to an early grave at age 27. Brian Jones, founding member of the Rolling Stones, died at age 27 in 1969. Then it was both Jimi Hendrix and Janis Joplin in 1970 and Jim Morrison the following year. Kurt Cobain joined “The Club” in 1994. All 27 years old. Coincidence? Or were these musical geniuses paying debts, too?

4. “Da Billy Goat” Curse

In 1945, William “Billy Goat” Sianis brought his pet goat, Murphy, to Wrigley Field to see the fourth game of the 1945 World Series between the Chicago Cubs and the Detroit Tigers. Sianis and his goat were later ejected from the game, and Sianis reportedly put a curse on the team that day. Ever since, the Cubs have had legendarily bad luck.

Over the years, Cubs fans have experienced agony in repeated late-season collapses when victory seemed imminent. In 1969, 1984, 1989, and 2003, the Cubs were painfully close to advancing to the World Series but couldn’t hold the lead. Even those who don’t consider themselves Cubs fans blame the hex for the weird and almost comical losses year after year. The Cubs have not won a World Series since 1908 — no other team in the history of the game has gone as long without a championship.

5. Rasputin and the Romanovs

Rasputin, the self-proclaimed magician and cult leader, wormed his way into the palace of the Romanovs, Russia’s ruling family, around the turn of the last century. After getting a little too big for his britches, a few of the Romanovs allegedly decided to have him killed. But he was exceptionally resilient.

Reportedly it took poison, falling down a staircase, and repeated gunshots before Rasputin was finally dead. It’s said that Rasputin mumbled a curse from his deathbed, assuring Russia’s ruling monarchs that they would all be dead within a year. That did come to pass, as the Romanov family was brutally murdered in a mass execution less than a year later.

6. Tecumseh and the American Presidents

The curse of Tippecanoe, or “Tecumseh’s Curse,” is a widely held explanation of the fact that from 1840 to 1960, every U.S. president elected (or reelected) every twentieth year has died in office. Popular belief is that Tecumseh administered the curse when William Henry Harrison’s troops defeated the Native American leader and his forces at the Battle of Tippecanoe. Check it out:

  • William Henry Harrison was elected president in 1840. He caught a cold during his inauguration, which quickly turned into pneumonia. He died April 4, 1841, after only one month in office.
  • Abraham Lincoln was elected president in 1860 and reelected four years later. Lincoln was assassinated and died April 15, 1865.
  • James Garfield was elected president in 1880. Charles Guiteau shot him in July 1881. Garfield died several months later, from complications following the gunshot wound.
  • William McKinley was elected president in 1896 and reelected in 1900. On September 6, 1901, McKinley was shot by Leon F. Czolgosz, who considered the president an “enemy of the people.” McKinley died eight days later.
  • Three years after Warren G. Harding was elected president in 1920, he died suddenly of either a heart attack or stroke while traveling in San Francisco.
  • Franklin D. Roosevelt was elected president in 1932 and reelected in 1936, 1940, and 1944. His health wasn’t great, but he died rather suddenly in 1945, of a cerebral hemorrhage or stroke.
  • John F. Kennedy was elected president in 1960 and assassinated in Dallas three years later.
  • Ronald Reagan was elected president in 1980, and though he was shot by an assassin in 1981, he did survive. Some say this broke the curse, which should make George W. Bush happy. At the time of this writing, Bush, who was elected in 2000, is serving his second term in office.

7. The Curse of the Kennedy Family

Okay, so maybe if this family had stayed out of politics and off airplanes, their fate might be different. Regardless, the number of Kennedy family tragedies has led some to believe there must be a curse on the whole bunch. You decide:

  • JFK’s brother Joseph, Jr., and sister Kathleen both died in separate plane crashes in 1944 and 1948, respectively.
  • JFK’s other sister, Rosemary, was institutionalized in a mental hospital for years.
  • John F. Kennedy himself, America’s 35th president, was assassinated in 1963 at age 46.
  • Robert Kennedy, JFK’s younger brother, was assassinated in 1968.
  • Senator Ted Kennedy, JFK’s youngest brother, survived a plane crash in 1964. In 1969, he was driving a car that went off a bridge, causing the death of his companion, Mary Jo Kopechne. His presidential goals were pretty much squashed after that.
  • In 1984, Robert Kennedy’s son David died of a drug overdose. Another son, Michael, died in a skiing accident in 1997.
  • In 1999, JFK, Jr., his wife, and his sister-in-law died when the small plane he was piloting crashed into the Atlantic Ocean.

Source: How Stuff Works

Posted in Culture, Curses, Historic Events, Historic Figures, History, Native American, Society

Profile: Quanah Parker

Posted by jase on July 19, 2009

Quanah Parker (c. late 1840s – February 23, 1911) was a Native American leader, the son of Comanche chief Peta Nocona and European American woman Cynthia Ann Parker, and the last chief of the Quahadi Comanche Indians.

Quanah Parker’s mother, Cynthia Ann Parker (b. ca. 1827), was a member of the large Parker frontier family that settled in east Texas in the 1830s. She was captured in 1836 by Comanches during the raid on Fort Parker near present-day Groesbeck, Texas. She was given the Indian name Nadua (“Someone Found”) and adopted into the Nocona band of Comanches. Cynthia Ann eventually married the Comanche warrior Noconie (also known as Tah-con-ne-ah-pe-ah, and called Peta Nocona by the whites), who was a Mexican captive. Quanah was her firstborn son. She also had another son, Pecos (“Pecan”), and a daughter, Topsana (“Prairie Flower”).

In 1860, Cynthia Ann Parker was recaptured at the Battle of Pease River by Texas Rangers under Lawrence Sullivan Ross. Peta Nocona, Quanah, and most of the other men were out hunting when Ross’ men attacked. Returning to find the aftermath, they found it difficult to get any information, as only a few people were still alive. Meanwhile, Cynthia Ann was reunited with her white family, but years with the Comanches had made her a different person. She frequently demanded to return to her husband but was never permitted to do so. After Topsana died of an illness in 1863, Cynthia Ann starved herself to death in 1870.

In October 1867, Quanah was among the Comanche chiefs at Medicine Lodge. Though he did not give a speech – his place was as an observer – he did make a statement about not signing the Medicine Lodge Treaty. His band remained free while other Comanches signed.

In the early 1870s, the plains Indians were losing the battle for their land. Following the capture of the Kiowa chiefs Satank, Adoeet (Big Tree), and Satanta, the Kiowa, Comanche, and Cheyenne tribes joined forces in several battles. Colonel Ranald Mackenzie was sent to eradicate all remaining Indians who had not settled on reservations.

In 1874, in the Texas panhandle, a Comanche prophet named Isatai summoned the tribes to the Second Battle of Adobe Walls, where several buffalo hunters were active. With Kiowa Chief Big Bow, Quanah was in charge of one group of warriors. The incident was his closest brush with death; he was shot twice.

With their food source depleted, and under constant pressure from the army, the Quahadi Comanches finally surrendered and in 1875 moved to a reservation in southwestern Oklahoma. Quanah’s home in Cache, Oklahoma, was called the Star House. Parker’s was the last tribe of the Staked Plains, or Llano Estacado, to come to the reservation. Quanah was named chief over all the Comanches on the reservation and proved to be a forceful, resourceful and able leader. Through wise investments, he became perhaps the wealthiest American Indian of his day in the United States. Quanah embraced much of white culture and was well respected by the whites. He went on hunting trips with President Theodore Roosevelt. Nevertheless, he rejected both monogamy and traditional Protestant Christianity in favor of the Native American Church movement. He had five wives and twenty-five children and founded the Native American Church. One of his sons, White Parker, later became a Methodist minister.

Author Bill Neeley writes:

“Not only did Quanah pass within the span of a single lifetime from a Stone Age warrior to a statesman in the age of the Industrial Revolution, but he never lost a battle to the white man and he also accepted the challenge and responsibility of leading the whole Comanche tribe on the difficult road toward their new existence.”

Quanah died on February 23, 1911. He is buried at the Fort Sill Cemetery, beside his mother and sister. The inscription on his tombstone reads:

Resting Here Until Day Breaks
And Shadows Fall and Darkness
Disappears is
Quanah Parker Last Chief of the Comanches
Born 1852
Died Feb. 23, 1911

Quanah Parker is credited as one of the first big leaders of the Native American Church movement. Parker adopted the peyote religion after being gored in southern Texas by a bull. Parker was visiting his mother’s brother, John Parker, in Texas, where he was attacked and severely wounded. To fight an onset of blood-burning fever, a Mexican curandera was summoned, and she prepared a strong peyote tea from fresh peyote to heal him. It was from this incident on that Quanah Parker became involved with peyote. Peyote is reported to contain hordenine and tyramine, phenylethylamine alkaloids which act as potent natural antibiotics when taken in a combined form.

Parker taught that the sacred peyote medicine was the sacrament given to the Indian peoples, and was to be used with water when taking communion in a traditional Native American Church medicine ceremony. Parker was a proponent of the “half-moon” style of the peyote ceremony. The “cross” ceremony later evolved in Oklahoma due to Caddo influences introduced by John Wilson, a Caddo-Delaware Indian who traveled extensively around the same time as Parker during the early days of the Native American Church movement. The Native American Church was the first truly “American” religion based on Christianity outside of the Latter Day Saints.

Parker’s most famous teaching regarding the Spirituality of the Native American Church:

The White Man goes into his church and talks about Jesus. The Indian goes into his Tipi and talks with Jesus.

The modern reservation era in Native American history began with the widespread adoption of the Native American Church and Christianity by virtually every Native American tribe and culture in North America and Canada, a result of Parker and Wilson’s efforts. The peyote religion and the Native American Church, however, were never the traditional religious practice of North American Indian cultures. The religion was driven by Parker’s leadership and by influences from Mexico and other southern tribes who had used peyote since ancient times. Under Parker’s leadership, peyote became an important item of trade, and this, combined with his Church movement and political and financial contacts, garnered Parker enormous wealth during his lifetime.

Criticism

Although praised by many in his tribe as a preserver of their culture, Quanah had critics within the Comanche community. Many claimed that he “sold out to the white man” with his rancher persona in later life, dressing and living in a more American than Comanche style. Quanah did adopt more white ways than most other Comanche of his time, but he always wore his hair long and in braids. He also refused to follow white American marriage customs, which would have required him to cast aside four of his five wives.

Another point of controversy among the Comanche was that Quanah was never elected chief of the entire tribe by the people themselves. Traditionally, the Comanche had no single chief. The various bands of the Comanche had their own chiefs, with no single figure standing for the entire people. But that, like many other things, changed with reservation times.

Family

Quanah’s grandfather was Chief Iron Jacket, famous among the Comanches as a powerful chief who wore a Spanish coat of mail and was said to have the power to blow bullets away with his breath.

Quanah’s first wife was Weakeah, daughter of Comanche chief Yellow Bear. Originally, she was espoused to another warrior. Quanah and Weakeah eloped, and took several other warriors with them. It was from this small group that the large Quahadi band would form. Yellow Bear pursued the band and eventually Quanah made peace with him, and the two bands united, forming the largest force of Comanche Indians.

Over the years, Quanah accumulated four more wives. He had twenty-five children. Many north Texans and south Oklahomans claim descent from Quanah. It has been said that more Comanches are related to Quanah than to any other chief. One grandson became Comanche chairman, the modern “Chief” of the tribe.

After moving to the reservation, Quanah got in touch with his white relatives for the first time. He stayed with them for a few weeks, during which he studied English and western culture and learned white farming techniques.

In his later years, Quanah carried on a correspondence by letter with Texas cattleman Charles Goodnight. Though Goodnight was illiterate, he dictated the letters to his wife, who in turn sent them to Quanah.

Posted in Culture, Historic Figures, History, Men, Native American, Profiles

Profile: Omar Khayyam

Posted by jase on July 5, 2009

Omar Khayyam (Persian: عمر خیام) (born 1048 AD, Neyshapur, Iran; died 1123 AD, Neyshapur, Iran) was a Persian polymath: a mathematician, philosopher, astronomer and poet.

He has also become established as one of the major mathematicians and astronomers of the medieval period. He is recognized as the author of the most important treatise on algebra written before modern times, the Treatise on Demonstration of Problems of Algebra, which gives a geometric method for solving cubic equations by intersecting a hyperbola with a circle. He also contributed to calendar reform and may have proposed a heliocentric theory well before Copernicus.

His significance as a philosopher and teacher, and his few remaining philosophical works, have not received the same attention as his scientific and poetic writings. Zamakhshari referred to him as “the philosopher of the world”. Many sources have also testified that he taught for decades the philosophy of Ibn Sina in Nishapur, where Khayyam lived most of his life, breathed his last, and was buried, and where his mausoleum remains today, a masterpiece of Iranian architecture visited by many people every year.

Outside Iran and the Persian-speaking countries, Khayyam has had an impact on literature and societies through translation and the works of scholars. The greatest such impact was in English-speaking countries; the English scholar Thomas Hyde (1636–1703) was the first non-Persian to study him. The most influential of all, however, was Edward FitzGerald (1809–83), who made Khayyam the most famous poet of the East in the West through his celebrated translation and adaptations of Khayyam’s rather small number of quatrains (rubaiyaas) in the Rubaiyat of Omar Khayyam.

Omar Khayyam was famous during his times as a mathematician. He wrote the influential Treatise on Demonstration of Problems of Algebra (1070), which laid down the principles of algebra, part of the body of Persian Mathematics that was eventually transmitted to Europe.

Like most Persian mathematicians of the period, Omar Khayyám was also famous as an astronomer.  Omar Khayyam was part of a panel that introduced several reforms to the Persian calendar, largely based on ideas from the Hindu calendar. On March 15, 1079, Sultan Malik Shah I accepted this corrected calendar as the official Persian calendar.  Omar Khayyám also built a star map (now lost), which was famous in the Persian and Islamic world.

It is said that Omar Khayyam also demonstrated to an audience that included the then-prestigious and most respected scholar Imam Ghazali that the universe is not moving around the earth, as was believed by all at that time. By constructing a revolving platform and a simple arrangement of star charts lit by candles around the circular walls of the room, he demonstrated that the earth revolves on its axis, bringing into view different constellations throughout the night and day (completing a one-day cycle). He also argued that stars are stationary objects in space which, if they were moving around the earth, would have been burnt to cinders due to their large mass. Some of these ideas may have been transmitted to Western science after the Renaissance.

Omar Khayyám’s poetic work has eclipsed his fame as a mathematician and scientist.

He is believed to have written about a thousand four-line verses or quatrains (rubaai’s). In the English-speaking world, he was introduced through the Rubáiyát of Omar Khayyám which are rather free-wheeling English translations by Edward FitzGerald (1809-1883).

Other translations of parts of the rubáiyát (rubáiyát meaning “quatrains”) exist, but FitzGerald’s are the most well known. Translations also exist in languages other than English.

Ironically, FitzGerald’s translations reintroduced Khayyam to Iranians, “who had long ignored the Neishapouri poet.” A 1934 book by one of Iran’s most prominent writers, Sadeq Hedayat, Songs of Khayyam (Taranehha-ye Khayyam), is said to have “shaped the way a generation of Iranians viewed” the poet.

Omar Khayyam’s personal beliefs are not known with certainty, but much is discernible from his poetic oeuvre.

Despite strong Islamic training, it is clear that Omar Khayyam was himself undevout and had no sympathy with popular religion, but the verse “Enjoy wine and women and don’t be afraid, God has compassion” suggests that he wasn’t an atheist. Some religious Iranians have argued that Khayyam’s references to intoxication in the Rubaiyat were actually the intoxication of the religious worshiper with his Divine Beloved – a Sufi conceit. This, however, is reportedly a minority opinion dismissed as wishful pious thinking by most Iranians.

It is almost certain that Khayyám objected to the notion that every particular event and phenomenon was the result of divine intervention. Nor did he believe in an afterlife with a Judgment Day or rewards and punishments. Instead, he supported the view that laws of nature explained all observed phenomena. One hostile orthodox account portrays him as “versed in all the wisdom of the Greeks” and as insisting that studying science on Greek lines is necessary. He came into conflict with religious officials several times and had to explain his views on Islam on multiple occasions; there is even a story of a treacherous pupil who tried to bring him into public odium. The historian Ibn al-Qifti wrote that Omar Khayyam “performed pilgrimages not from piety but from fear” of contemporaries who divined his unbelief.

Khayyám’s disdain for Islam in general, and for particular aspects of it such as eschatology, religious taboos, and divine revelation, is clearly visible in his writings, particularly the quatrains, which routinely describe those who claim to receive God’s word as maggot-minded fanatics.

Khayyam himself refused to be associated with the title falsafi (lit. “philosopher”) in the Aristotelian sense, stressing that he wished only “to know who I am”. Among the philosophers of his day, some contemporaries labeled him “detached from divine blessings”.

Khayyam the philosopher can be understood from two rather distinct sources: his Rubaiyat, and his own philosophical works read in light of the intellectual and social conditions of his time. The latter reading is informed by the evaluations of Khayyam’s work by scholars and philosophers such as Bayhaqi, Nezami Aruzi, and Zamakhshari, and by the Sufi poets and writers Attar Nishapuri and Najmeddin Razi.

As a mathematician, Khayyam made fundamental contributions to the philosophy of mathematics, especially in the context of Persian mathematics and Persian philosophy, with which most other Persian scientists and philosophers, such as Avicenna, Biruni, and Tusi, are associated. At least three basic mathematical ideas with strong philosophical dimensions can be associated with Khayyam.

Posted in Culture, Historic Figures, Literature, Poetry, Profiles, Religion, Science | Leave a Comment »

Is An Ugly Baby Harder to Love?

Posted by jase on June 24, 2009

Cher & Eric Stoltz in a scene from 'Mask'

Moms might want to hang on to those Mother’s Day cards they got last month. There may not be much more familial goodwill forthcoming — at least not after kids get wind of a new study released by Harvard-affiliated McLean Hospital and published in the online journal PLoS ONE. Turns out that your mother’s feelings for you may not be the unconditional things you always assumed. It’s possible, researchers say, that the prettier you were when you were born, the more she loved you.

It’s never been a secret that beautiful people get more breaks than everyone else, nor that the bias may start in the nursery. An oft-cited — and deeply disturbing — Israeli study once showed that 70% of abused or abandoned children had at least one apparent flaw in their appearance, which otherwise had no impact on their health or educability. McLean psychiatrist Dr. Igor Elman and postdoctoral student Rinah Yamamoto devised a study to explore that phenomenon more closely.

Elman and Yamamoto recruited 27 volunteers — 13 men and 14 women — and sat them at computer screens where they were randomly shown pictures of 50 healthy and attractive babies and 30 others with distinct facial irregularities such as a cleft palate or a skin condition. The volunteers were told that each picture would remain on the screen for four seconds but they could shorten that time by clicking one key or prolong it by clicking another. What the researchers wanted to learn, Elman explains, is how much effort people were willing to exert to look at pictures of pretty babies or avoid pictures of less pretty ones — and, importantly, what that implies.

Much of the answer, they found out, depends on the beholder’s sex. The men in the study were less likely than women to click off photos of unattractive babies — viewing them for the full four seconds — but clicked quite a bit to hold on to the images of the pretty ones. Their reactions were the same whether they had children of their own or not. Women, conversely, left the keyboard alone when they were looking at pretty babies but hurried away from the less attractive ones — with the results again not seeming to be influenced by whether or not they were mothers themselves.

“[Women] pressed the key 2.5 times as much to get rid of those pictures,” Elman says. “That’s highly statistically significant.”

Of all the things driving that response, the most primal one may be evolution. Parents devote a lot of resources to raising a child — food, time, money, love — and those assets are usually in finite supply. All animals, humans included, are hardwired to spend wisely, devoting the most energy to the offspring most likely to yield the highest genetic payoff; healthy, beautiful offspring are the best bet of all. Perhaps women, who still must do the lion’s share of childcare, are naturally more attuned to this trade-off than men are. “In general, men tend to be aesthetically oriented,” Elman says, “so they’ll press a lot to hold the beautiful babies on the screen. Women are more consequence-oriented.”

There are some potential holes in Elman’s work, all of which he acknowledges. For one thing, it’s possible women avoid the unattractive faces not because they’re less sensitive to them but because they’re more sensitive, simply finding the hardships endured by unhealthy babies too difficult to contemplate. Such highly tuned empathy can ultimately make them better caregivers, even if a four-second exposure to the idea is painful. “Everyone will try to get away from a stimulus that feels like a punishment and hold on to one that feels like a reward,” Elman says.

More important, the way people of either gender react to a picture of an anonymous child with physical abnormalities is likely to be radically different from the way they would react if that child were their own — something that is readily evident from all the disabled children on whom parents lavish love. Still, the fact that both parents and nonparents in Elman’s study reacted the same way to the pictures suggests that their responses are deeply ingrained and that they may be hard to mitigate simply by having children of their own.

The gender differences, by the way, don’t let fathers off the hook. Men may not have hurried to get the unattractive faces off the screen, but neither did they linger over them the way they did the attractive faces. In both cases, this suggests bias, and when the rubber hits the road of real childcare, parents of either sex may end up having similar instincts. More clarity should come when Elman conducts the next phase of his work: running the same experiment but hooking the subjects up to brain scans throughout it. This will make it far easier to see just which areas of the brain are activated when viewing the pictures and, by implication, which feelings and motivations are being evoked. Until then, both Mom and Dad — who already have enough to worry about — should probably get the benefit of the doubt.

Posted in Biology, Culture, Infants, Medical, Men, Society | 1 Comment »

The Coming Evangelical Collapse

Posted by jase on June 22, 2009

I received this interesting article from a friend.  The article appears on the website of The Christian Science Monitor.  Have a look…

We are on the verge – within 10 years – of a major collapse of evangelical Christianity. This breakdown will follow the deterioration of the mainline Protestant world and it will fundamentally alter the religious and cultural environment in the West.

Within two generations, evangelicalism will be a house deserted of half its occupants. (Between 25 and 35 percent of Americans today are Evangelicals.) In the “Protestant” 20th century, Evangelicals flourished. But they will soon be living in a very secular and religiously antagonistic 21st century.

This collapse will herald the arrival of an anti-Christian chapter of the post-Christian West. Intolerance of Christianity will rise to levels many of us have not believed possible in our lifetimes, and public policy will become hostile toward evangelical Christianity, seeing it as the opponent of the common good.

Millions of Evangelicals will quit. Thousands of ministries will end. Christian media will be reduced, if not eliminated. Many Christian schools will go into rapid decline. I’m convinced the grace and mission of God will reach to the ends of the earth. But the end of evangelicalism as we know it is close.

Why is this going to happen?

1. Evangelicals have identified their movement with the culture war and with political conservatism. This will prove to be a very costly mistake. Evangelicals will increasingly be seen as a threat to cultural progress. Public leaders will consider us bad for America, bad for education, bad for children, and bad for society.

The evangelical investment in moral, social, and political issues has depleted our resources and exposed our weaknesses. Being against gay marriage and being rhetorically pro-life will not make up for the fact that massive majorities of Evangelicals can’t articulate the Gospel with any coherence. We fell for the trap of believing in a cause more than a faith.

2. We Evangelicals have failed to pass on to our young people an orthodox form of faith that can take root and survive the secular onslaught. Ironically, the billions of dollars we’ve spent on youth ministers, Christian music, publishing, and media have produced a culture of young Christians who know next to nothing about their own faith except how they feel about it. Our young people have deep beliefs about the culture war, but do not know why they should obey scripture, the essentials of theology, or the experience of spiritual discipline and community. Coming generations of Christians are going to be monumentally ignorant and unprepared for culture-wide pressures.

3. There are three kinds of evangelical churches today: consumer-driven megachurches, dying churches, and new churches whose future is fragile. Denominations will shrink, even vanish, while fewer and fewer evangelical churches will survive and thrive.

4. Despite some very successful developments in the past 25 years, Christian education has not produced a product that can withstand the rising tide of secularism. Evangelicalism has used its educational system primarily to staff its own needs and talk to itself.

5. The confrontation between cultural secularism and the faith at the core of evangelical efforts to “do good” is rapidly approaching. We will soon see that the good Evangelicals want to do will be viewed as bad by so many, and much of that work will not be done. Look for ministries to take on a less and less distinctively Christian face in order to survive.

6. Even in areas where Evangelicals imagine themselves strong (like the Bible Belt), we will find a great inability to pass on to our children a vital evangelical confidence in the Bible and the importance of the faith.

7. The money will dry up.

What will be left?

•Expect evangelicalism to look more like the pragmatic, therapeutic, church-growth oriented megachurches that have defined success. Emphasis will shift from doctrine to relevance, motivation, and personal success – resulting in churches further compromised and weakened in their ability to pass on the faith.

•Two of the beneficiaries will be the Roman Catholic and Orthodox communions. Evangelicals have been entering these churches in recent decades and that trend will continue, with more efforts aimed at the “conversion” of Evangelicals to the Catholic and Orthodox traditions.

•A small band will work hard to rescue the movement from its demise through theological renewal. This is an attractive, innovative, and tireless community with outstanding media, publishing, and leadership development. Nonetheless, I believe the coming evangelical collapse will not result in a second reformation, though it may result in benefits for many churches and the beginnings of new churches.

•The emerging church will largely vanish from the evangelical landscape, becoming part of the small segment of progressive mainline Protestants that remain true to the liberal vision.

•Aggressively evangelistic fundamentalist churches will begin to disappear.

•Charismatic-Pentecostal Christianity will become the majority report in evangelicalism. Can this community withstand heresy, relativism, and confusion? To do so, it must make a priority of biblical authority, responsible leadership, and a reemergence of orthodoxy.

•Evangelicalism needs a “rescue mission” from the world Christian community. It is time for missionaries to come to America from Asia and Africa. Will they come? Will they be able to bring to our culture a more vital form of Christianity?

•Expect a fragmented response to the culture war. Some Evangelicals will work to create their own countercultures, rather than try to change the culture at large. Some will continue to see conservatism and Christianity through one lens and will engage the culture war much as before – a status quo the media will be all too happy to perpetuate. A significant number, however, may give up political engagement for a discipleship of deeper impact.

Is all of this a bad thing?

Evangelicalism doesn’t need a bailout. Much of it needs a funeral. But what about what remains?

Is it a good thing that denominations are going to become largely irrelevant? Only if the networks that replace them are able to marshal resources, training, and vision to the mission field and into the planting and equipping of churches.

Is it a good thing that many marginal believers will depart? Possibly, if churches begin and continue the work of renewing serious church membership. We must change the conversation from the maintenance of traditional churches to developing new and culturally appropriate ones.

The ascendency of Charismatic-Pentecostal-influenced worship around the world can be a major positive for the evangelical movement if reformation can reach those churches and if it is joined with the calling, training, and mentoring of leaders. If American churches come under more of the influence of the movement of the Holy Spirit in Africa and Asia, this will be a good thing.

Will the evangelicalizing of Catholic and Orthodox communions be a good development? One can hope for greater unity and appreciation, but the history of these developments seems to be much more about a renewed vigor to “evangelize” Protestantism in the name of unity.

Will the coming collapse get Evangelicals past the pragmatism and shallowness that has brought about the loss of substance and power? Probably not. The purveyors of the evangelical circus will be in fine form, selling their wares as the promised solution to every church’s problems. I expect the landscape of megachurch vacuity to be around for a very long time.

Will it shake loose the prosperity Gospel from its parasitical place on the evangelical body of Christ? Evidence from similar periods is not encouraging. American Christians seldom seem to be able to separate their theology from an overall idea of personal affluence and success.

The loss of their political clout may impel many Evangelicals to reconsider the wisdom of trying to create a “godly society.” That doesn’t mean they’ll focus solely on saving souls, but the increasing concern will be how to keep secularism out of church, not stop it altogether. The integrity of the church as a countercultural movement with a message of “empire subversion” will increasingly replace a message of cultural and political entitlement.

Despite all of these challenges, it is impossible not to be hopeful. As one commenter has already said, “Christianity loves a crumbling empire.”

We can rejoice that in the ruins, new forms of Christian vitality and ministry will be born. I expect to see a vital and growing house church movement. This cannot help but be good for an evangelicalism that has made buildings, numbers, and paid staff its drugs for half a century.

We need a new evangelicalism that learns from the past and listens more carefully to what God says about being His people in the midst of a powerful, idolatrous culture.

I’m not a prophet. My view of evangelicalism is not authoritative or infallible. I am certainly wrong in some of these predictions. But is there anyone who is observing evangelicalism in these times who does not sense that the future of our movement holds many dangers and much potential?

Michael Spencer is a writer and communicator living and working in a Christian community in Kentucky. He describes himself as “a postevangelical reformation Christian in search of a Jesus-shaped spirituality.” This essay is adapted from a series on his blog, InternetMonk.com.

Posted in Culture, Religion, Society | Leave a Comment »

Are Cities “Beyond Biology”?

Posted by jase on June 19, 2009

Casey Kazan over at The Daily Galaxy posted the following interesting story.

Dr. Geoffrey West, President and Distinguished Professor of the Santa Fe Institute, recently led a team of scientists that has found that city growth driven by wealth creation increases at a rate that is faster than exponential. The only way to avoid collapse as a population outstrips the finite resources available to it is through constant cycles of innovation, which re-engineer the initial conditions of growth. But the greater the absolute population, the smaller the relative return on each such investment, so innovation must come ever faster.

Thus, the bigger the city, the faster life is; and the rate at which life gets faster must itself accelerate to keep the city a going concern, so much so that major innovations must now occur on time-scales significantly shorter than a human lifespan.
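
A minimal sketch of the underlying mathematics, following the growth equation reported by Bettencourt, West, and colleagues in their 2007 PNAS paper (the reading of it here is a summary, not a quotation): if a city of population N generates resources superlinearly, \( Y = Y_0 N^{\beta} \) with \( \beta > 1 \) (empirically on the order of 1.1 to 1.3 for wealth and innovation), and growth is fed by the surplus left after maintenance, then

\[ \frac{dN}{dt} = \frac{Y_0}{E}\,N^{\beta} - \frac{R}{E}\,N , \]

where E is the resource cost of adding a person to the city and R the cost of maintaining one. For \( \beta > 1 \) the solution diverges at a finite time, and that singularity arrives sooner the larger the starting population; growth can continue only if an innovation resets \( Y_0 \) or \( \beta \) before the singularity is reached, which is why the cycles of innovation must come ever faster.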

“In this crucial sense cities are completely different from biological organisms, which slow down with size; their relative metabolism, growth rates, heart rates, and even rates of innovation – their evolutionary rates – systematically – and predictably – decrease with organismal size,” West said. “Several thousand years ago the evolution of social organizations in the form of cities brought a new dynamic to the planet that seems to be uniquely human: People actually do walk on average faster in larger cities whereas heart rates decrease as animal size increases.”

With the city mankind has created an “organism” operating beyond the bounds of biology.

Casey also posted this:

If you agree with urban authority Jane Jacobs that the city is more important to the human species than the nation-state, you’ll enjoy Economist writer-at-large Johnny Grimond, who notes that, for the first time in history, the majority of the people on our planet now live in cities, and who argues that going forward, human history will become urban history: Homo sapiens has evolved into Homo urbanus.

The backstory for this profound, if not evolutionary, shift in human behavior is the fact that even in 1800 only 3% of the world’s population lived in cities.

Grimond’s observations underscore the findings of Dr. Geoffrey West and the Santa Fe Institute team described above: city growth driven by wealth creation increases at a faster-than-exponential rate, and only ever-quickening cycles of innovation can keep a growing population ahead of its finite resources.

In this fascinating and brilliant analysis Grimond outlines the growth and importance of cities through history: “first, in the Fertile Crescent, the sweep of productive land that ran through Iraq, Syria, Jordan and Palestine, from which Jericho, Ur, Nineveh and Babylon would emerge. In time came other cities in other places: Harappa and Mohenjodaro in the Indus valley, Memphis and Thebes in Egypt, Yin and Shang cities in China, Mycenae in Greece, Knossos in Crete, Ugarit in Syria and, most spectacularly, Rome, the first great metropolis, which boasted, at its zenith in the third century AD, a population of more than 1 million people.”

“It was in the city,” Grimond points out, “that man was liberated from the tyranny of the soil and could develop skills, learn from other people, study, teach and develop the social arts that made country folk seem bumpkins. Homo urbanus did not just live in a town: he was urbane.”

Like the species of the planet, cities have mimicked biodiversity: some have been notable for their religious role, such as latter-day Rome; or as the hub of an empire, like Constantinople; or as centres of administration, such as Mandarin Beijing; or of political development, as in Medici Florence; or of learning, at Bologna and Fez; or of commerce, in Hamburg; or of a special product, as at Toledo. Like species of animal life, some flourished and some died, from forces as varied as conquest, plague, misgovernment or economic collapse.

Grimond sums up noting that “the sheer scale and speed of the current urban expansion make it unlike any of the big changes that have punctuated urban history. It mostly consists of poor people migrating in unprecedented numbers, and then producing babies on a similarly unprecedented scale. It is thus largely a phenomenon of poor and middle-income countries; the rich world has put most of its urbanization behind it.”

Posted in Culture, Society, Urbanism | Leave a Comment »

Japan’s “grass-eating men”

Posted by jase on June 18, 2009

One of Japan's herbivore men

Ryoma Igarashi likes going for long drives through the mountains, taking photographs of Buddhist temples and exploring old neighborhoods. He’s just taken up gardening, growing radishes in a planter in his apartment. Until recently, Igarashi, a 27-year-old Japanese television presenter, would have been considered effeminate, even gay. Japanese men have long been expected to live like characters on Mad Men, chasing secretaries, drinking with the boys, and splurging on watches, golf, and new cars.

Today, Igarashi has a new identity (and plenty of company among young Japanese men) as one of the soushoku danshi, literally translated “grass-eating boys.” Named for their lack of interest in sex and their preference for quieter, less competitive lives, Japan’s “herbivores” are provoking a national debate about how the country’s economic stagnation since the early 1990s has altered men’s behavior.

Newspapers, magazines, and television shows are newly fixated on the herbivores. “Have men gotten weaker?” was one theme of a recent TV talk show. “Herbivores Aren’t So Bad” is the title of a regular column on the Japanese Web site NB Online.

In this age of bromance and metrosexuals, why all the fuss? The short answer is that grass-eating men are alarming because they are the nexus between two of the biggest challenges facing Japanese society: the declining birth rate and anemic consumption. Herbivores represent an unspoken rebellion against many of the masculine, materialist values associated with Japan’s 1980s bubble economy. Media Shakers, a consulting company that is a subsidiary of Dentsu, the country’s largest advertising agency, estimates that 60 percent of men in their early 20s and at least 42 percent of men aged 23 to 34 consider themselves grass-eating men. Partner Agent, a Japanese dating agency, found in a survey that 61 percent of unmarried men in their 30s identified themselves as herbivores. Of the 1,000 single men in their 20s and 30s polled by Lifenet, a Japanese life-insurance company, 75 percent described themselves as grass-eating men.

Japanese companies are worried that herbivorous boys aren’t the status-conscious consumers their parents once were. They love to putter around the house. According to Media Shakers’ research, they are more likely to want to spend time by themselves or with close friends, more likely to shop for things to decorate their homes, and more likely to buy little luxuries than big-ticket items. They prefer vacationing in Japan to venturing abroad. They’re often close to their mothers and have female friends, but they’re in no rush to get married themselves, according to Maki Fukasawa, the Japanese editor and columnist who coined the term in NB Online in 2006.

Grass-eating boys’ commitment phobia is not the only thing that’s worrying Japanese women. Unlike earlier generations of Japanese men, they prefer not to make the first move, they like to split the bill, and they’re not particularly motivated by sex. “I spent the night at one guy’s house, and nothing happened—we just went to sleep!” moaned one incredulous woman on a TV program devoted to herbivores. “It’s like something’s missing with them,” said Yoko Yatsu, a 34-year-old housewife, in an interview. “If they were more normal, they’d be more interested in women. They’d at least want to talk to women.”

Shigeru Sakai of Media Shakers suggests that grass-eating men don’t pursue women because they are bad at expressing themselves. He attributes their poor communication skills to the fact that many grew up without siblings in households where both parents worked. “Because they had TVs, stereos and game consoles in their bedrooms, it became more common for them to shut themselves in their rooms when they got home and communicate less with their families, which left them with poor communication skills,” he wrote in an e-mail. (Japan has rarely needed its men to have sex as much as it does now. Low birth rates, combined with a lack of immigration, have caused the country’s population to shrink every year since 2005.)

It may be that Japan’s efforts to make the workplace more egalitarian planted the seeds for the grass-eating boys, says Fukasawa. In the wake of Japan’s 1985 Equal Employment Opportunity Law, women assumed greater responsibility at work, and the balance of power between the sexes began to shift. Though there are still significant barriers to career advancement for women, a new breed of female executive who could party almost as hard as her male colleagues emerged. Office lechery, which had been socially acceptable, became stigmatized as seku hara, or sexual harassment.

But it was the bursting of Japan’s bubble in the early 1990s, coupled with this shift in the social landscape, that made the old model of Japanese manhood unsustainable. Before the bubble collapsed, Japanese companies offered jobs for life. Salarymen who knew exactly where their next paycheck was coming from were more confident buying a Tiffany necklace or an expensive French dinner for their girlfriend. Now, nearly 40 percent of Japanese work in nonstaff positions with much less job security.

“When the economy was good, Japanese men had only one lifestyle choice: They joined a company after they graduated from college, got married, bought a car, and regularly replaced it with a new one,” says Fukasawa. “Men today simply can’t live that stereotypical ‘happy’ life.”

Yoto Hosho, a 22-year-old college dropout who considers himself and most of his friends herbivores, believes the term describes a diverse group of men who have no desire to live up to traditional social expectations in their relationships with women, their jobs, or anything else. “We don’t care at all what people think about how we live,” he says.

Many of Hosho’s friends spend so much time playing computer games that they prefer the company of cyber women to the real thing. And the Internet, he says, has helped make alternative lifestyles more acceptable. Hosho believes that the lines between men and women in his generation have blurred. He points to the popularity of “boys love,” a genre of manga and novels written for women about romantic relationships between men that has spawned its own line of videos, computer games, magazines, and cafes where women dress as men.

Fukasawa contends that while some grass-eating men may be gay, many are not. Nor are they metrosexuals. Rather, their behavior reflects a rejection of both the traditional Japanese definition of masculinity and what she calls the West’s “commercialization” of relationships, under which men needed to be macho and purchase products to win a woman’s affection. Some Western concepts, like going to dinner parties as a couple, never fit easily into Japanese culture, she says. Others never even made it into the language—the term “ladies first,” for instance, is usually said in English in Japan. During Japan’s bubble economy, “Japanese people had to live according to both Western standards and Japanese standards,” says Fukasawa. “That trend has run its course.”

Japanese women are not taking the herbivores’ indifference lightly. In response to the herbivorous boys’ tepidity, “carnivorous girls” are taking matters into their own hands, pursuing men more aggressively. Also known as “hunters,” these women could be seen as Japan’s version of America’s cougars.

While many Japanese women might disagree, Fukasawa sees grass-eating boys as a positive development for Japanese society. She notes that before World War II, herbivores were more common: Novelists such as Osamu Dazai and Soseki Natsume would have been considered grass-eating boys. But in the postwar economic boom, men became increasingly macho, increasingly hungry for products to mark their personal economic progress. Young Japanese men today are choosing to have less to prove.

More on Japan’s herbivore men

Posted in Culture, Men, Society | 1 Comment »