Saturday, August 30, 2008

Appiah and Knobe on politics.

Here.

The philosophy of neuroeconomics.

Neuroeconomics looks at the brain to understand why we decide the way we do in economic contexts.

To do so, it uses neuroimaging techniques to identify which brain regions mediate economic decisions.

Conventional economics does not normally invoke nervous systems and their activity when building its theories of economic behavior.

But ultimately, by paying no attention to the nervous system, conventional economic theory tells us nothing about which biological mechanisms limit, shape, or constrain economic behavior.

Yet neuroeconomics per se does not tell us why or how the decision-making machinery of organisms with complex nervous systems evolved to respond flexibly to variable environments, nor has it produced a robust theory that allows neuroeconomic predictions (for now!).

In other words, neuroeconomics still lacks solid philosophical foundations.

Of course, neuroeconomics has not yet explained how and why evolution could have produced nervous systems that operate in pursuit of efficiency (or how goals could have been implemented physiologically), nor how evolution could have designed brains to respond to environments that change through the actions of other agents, endogenous change, the way it has for environments that change on their own, exogenous change (exogenous vs. endogenous change).

Nor does neuroeconomics explain the trade-off between a large, energetically costly brain and the more complex behavior that a larger brain affords (the theory of brain capital).

In economic terms these questions could be put as follows: why has evolution invested so much in building a brain so complex that it is very expensive to maintain energetically? (Animals need to replenish energy, to recharge the batteries they drain, and for that they feed. But foraging itself consumes energy, and the more complex the nervous system, the more energy it consumes, energy that dissipates without return.)

To address these questions and give neuroeconomics its conceptual foundations, the "philosophy of neuroeconomics" is already under way.

Click here.

Thursday, August 28, 2008

Direct gaze modulates the brain's approach-avoidance systems.

There is no doubt that the face and facial expressions of emotion play an important role in human social interaction.

The human face can convey information about emotions, interest, motivation, the direction of attention, and other signals.

Professor Jari Hietanen of the Department of Psychology at the University of Tampere has shown in a study (of which he is lead author) how a direct gaze at the interlocutor's face, or an averted gaze, modulates the brain systems that motivate us to approach or to avoid others, respectively.

A direct gaze at another person's face activates, in the person receiving the gaze, the prefrontal lobe of the left hemisphere, which is associated with approach-related motivational processes, whereas an averted gaze, directed not at the other's face but at another point in the visual field, activates in the person perceiving it the prefrontal lobe of the right hemisphere, which is associated with avoidance- or withdrawal-related motivational processes.

According to the researchers, this is the first time physiological data have shown how gaze affects people's motivational reactions.

Free abstract here.

Wednesday, August 27, 2008

The paradox of the market concept in economics?

The fundamental principle underlying how markets operate in economics, as price-setting mechanisms or instruments for assigning value to goods (consumables, tangible or intangible, and so on), is that in principle everyone can participate, and the potential for gains (or losses!) is what creates strong incentives for people to seek out the best available information.

In other words, when I invest in the stock market and bet my money on a company doing well, benefiting both the company and me (if everything goes as expected), a mechanism is at work that encourages creativity and productivity in both actors.

In me, because it drives me to come up with the money and take the gamble, and in the company, because it pushes it to fulfill its mission (to produce).

But of course, when information is distributed asymmetrically among economic agents (between the company and me; and there are, moreover, bodies charged with making that information transparent, which itself suggests there is more than one arms race of deception between the two [or perhaps three or more] parties, with possible conflicts of interest...), it is very hard to know how the mechanism regulates itself.

Any company can engage in financial engineering, cook its books, act immorally and illegally, and swindle me (or ruin me) just when I thought I had found the true information on which to base my investment decision.

But then, what mechanism beyond markets can promote productive social innovation?

I cannot find one.

As the saying goes, better the good you know than the better you don't.

What other mechanism can economics find to assign value to things and to promote productivity and social innovation?

Tuesday, August 26, 2008

Quote of the day.

"... nuestro sistema visual tricromatico es un dispositivo inventado por ciertos arboles frutales para propagarse a si mismos"
-John Mollon-

Monday, August 25, 2008

The future of philosophy.

Brian Leiter discusses the state of the philosophical vocation here.

The neurobiology of sleep.

Why do we sleep?

What is the function of sleep?

Sleep remains one of the great mysteries of neurobiology: we know it is essential for life, yet it is still a puzzling phenomenon.

The medical and neurobiological theory favored and generally accepted to date was the theory of endocrine recovery, or restorative theory.

According to this theory, sleep should restore the levels of neurotransmitters and hormones "spent" during waking so that they are available for later use.

But this is not the case. We know there is no drop in hormone or neurotransmitter levels that would require sleep to occur.

"Durante el sueño un animal deja de de buscar alimento, deja de cuidar de su progenie, deja de procrear, o no puede evitar ser objeto de otros depredadores...luego el sueño debe cumplir una funcion muy importante" dicen los autores de un articulo que tras emplear tecnicas geneticas a un modelo animal (la mosca de la fruta, Drosophila melanogaster) han podido establecer paralelos entre el sueño humano y el descanso de la mosca a partir de varios componentes moleculares compartidos.

Free abstract here; without a subscription to the journal you cannot access the full text. (Open access to science now!)

Update: the blog Mind Hacks has a post on the usefulness of sleep that also describes Giulio Tononi's sleep research.

Sunday, August 24, 2008

Empathy.

In the study of the biology of prosocial behavior (altruistic or cooperative behavior), empathy is the pinnacle.

Empathy, a term derived from the German Einfühlung, should not be confused with sympathy, the capacity to feel a similar emotional reaction in response to another's emotion; the two are conceptually related but logically distinct. Empathy is the capacity to feel the emotion another person feels, but also the capacity to put oneself in the other's place (to feel concerned; hence the Greek "em," meaning "in," and "pathos," "feeling"), to ask why the other feels what they feel, and to try to change their state if need be. When we see someone who has fallen and broken a leg, sympathy evokes a shared emotional response and concern for the other, but only empathy puts us in their place and makes us react accordingly.

A much more technical definition of empathy, one that captures both ultimate causes (the phylogenetic evolution of the empathy mechanism and its function, what it is for) and proximate causes (the neural mechanisms mediating empathy), along with a very ambitious structural review that tries to organize the existing literature and the various theories of empathy in social psychology, primatology, philosophy, and neuroscience, was given by Preston and de Waal (2002) in an article open to expert commentary, where they said that empathy is:

"cualquier proceso donde la percepcion del estado de un objeto genera en el sujeto un estado que es mas aplicable al estado o situacion del objeto que al estado previo del sujeto o su situacion"

This definition of empathy is based on the perception-action model.

Briefly, the perception-action model is the idea that perception and action interact, with perceptual mechanisms overlapping motor mechanisms and sharing similar pathways.

This model gained strong support with the discovery of "mirror neurons," first in the macaque brain and later in the human brain.

Mirror neurons are a very special group of neurons that fire both when the subject perceives an action and when the subject performs that same action.

Several neuroimaging studies have examined the neural mechanisms of empathy for another's pain, and they show that the neural mechanisms underlying the perception of pain in oneself are the same ones activated when we perceive the pain of a loved one.

When a person receives a noxious stimulus, the activated areas are the insula (bilaterally, in both hemispheres), the anterior cingulate cortex, the cerebellum, and the somatosensory cortex: the so-called "pain neuromatrix."

When we empathize with another's pain, only the areas of this pain neuromatrix responsible for affective processing respond (the anterior cingulate cortex), not the areas responsible for the sensory processing of pain; in other words, empathy for another's pain involves the affective but not the sensory components of pain (see Singer et al. 2004).

However, new studies of empathic responses to others' pain show that it is not even necessary to observe noxious stimuli being applied (to perceive another person suffering); the empathic state itself modulates the perception of pain, even in oneself.

That is, merely empathizing with another makes our own perception of pain more intense, because, according to Preston and de Waal's (2002) perception-action definition of empathy, empathy arises when the empathizer's brain automatically activates a state similar to the one being empathized with (that of the observed, suffering person for whom we feel compassion).

Following this logic, the same negative emotional state when we ourselves suffer reactivates the representation we had of the other's pain (which, according to the perception-action model, involves the same areas that process pain in oneself), and this in turn produces a more intense experience of pain.

There is a proverb that says "whoever loves you well will make you cry."

It seems that, once again, folk wisdom has a point.

Click here.

Saturday, August 23, 2008

A little rock and roll... never hurts either.


Closer Video


By the way, TRAVIS play today at the Aste Nagusia (a whole week of street festivities in August, from the 16th to the 24th) in Bilbo. We'll be there.

Dopamine, dopamine, dopamine...

Dopamine is neuroscientists' go-to neurotransmitter.

It is in fashion!

It is quite likely that, after the pharmacologist Arvid Carlsson first discovered dopamine in the nervous system and won the Nobel Prize, the neuroscientist Wolfram Schultz (one of the leading figures in neuroeconomics) will soon follow him to Stockholm for his neurophysiological studies of dopaminergic cells, which have revealed this neurotransmitter to be an all-rounder.

Dopamine and its pathophysiology are involved in neurodegenerative diseases such as Parkinson's, in addictions, in mental disorders and movement disorders, and it participates in reward processing, decision making, learning, attention...

But what might interest philosophers most is that this neurotransmitter, studied quantitatively through the theory of conditioned reinforcement, is the substrate behind valuation, or values (how ideas acquire the status of guides in the politics of behavior, whether in morality, politics, and so on).
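A rough sketch I put together in Python (my own illustration of the textbook Rescorla-Wagner update, not Schultz's recordings or Grossberg's model): the value assigned to a cue is nudged on every trial by a dopamine-like prediction error, the difference between the reward received and the reward expected.

def update_value(value, reward, learning_rate=0.1):
    """Return the updated cue value and the prediction error (reward minus expectation)."""
    prediction_error = reward - value   # positive error ~ dopamine burst, negative ~ dip
    return value + learning_rate * prediction_error, prediction_error

value = 0.0
for trial in range(20):
    value, delta = update_value(value, reward=1.0)   # the cue is always followed by reward
print(round(value, 3), round(delta, 3))   # value climbs toward 1.0, the error shrinks toward 0

After enough pairings the cue itself carries the value, which is one way of cashing out the claim that dopamine is a substrate of valuation.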

The latest empirical article by the computational neuroscientist Stephen Grossberg introduces a connectionist model called MOTIVATOR (an acronym for Matching Objects To Internal VAlues Triggers Option Revaluations) and shows how dopamine strongly modulates conditioned learning by engaging the amygdala.

Abstract.

Animals are motivated to choose environmental options that can best satisfy current needs. To explain such choices, this paper introduces the MOTIVATOR (Matching Objects To Internal VAlues Triggers Option Revaluations) neural model. MOTIVATOR describes cognitive-emotional interactions between higher-order sensory cortices and an evaluative neuraxis composed of the hypothalamus, amygdala, and orbitofrontal cortex. Given a conditioned stimulus (CS), the model amygdala and lateral hypothalamus interact to calculate the expected current value of the subjective outcome that the CS predicts, constrained by the current state of deprivation or satiation. The amygdala relays the expected value information to orbitofrontal cells that receive inputs from anterior inferotemporal cells, and medial orbitofrontal cells that receive inputs from rhinal cortex. The activations of these orbitofrontal cells code the subjective values of objects. These values guide behavioral choices. The model basal ganglia detect errors in CS-specific predictions of the value and timing of rewards. Excitatory inputs from the pedunculopontine nucleus interact with timed inhibitory inputs from model striosomes in the ventral striatum to regulate dopamine burst and dip responses from cells in the substantia nigra pars compacta and ventral tegmental area. Learning in cortical and striatal regions is strongly modulated by dopamine. The model is used to address tasks that examine food-specific satiety, Pavlovian conditioning, reinforcer devaluation, and simultaneous visual discrimination. Model simulations successfully reproduce discharge dynamics of known cell types, including signals that predict saccadic reaction times and CS-dependent changes in systolic blood pressure.


Article here.

Friday, August 22, 2008

The future of neurotechnology.

Neurotechnology expert Zack Lynch explains this new field of study (neurotechnology) and tells us what the future holds for the treatment of brain disorders.

Monday, May 21, 2006
The future of Neurotechnology.

Emily Singer.

Despite huge leaps in our understanding of the inner workings of the brain, many of the most popular therapies for psychiatric and neurological disorders are just new versions of older drugs.

Now experts say new technologies, such as electrical stimulators, could revolutionize the treatment of brain disorders. Scientists hope that within the next 5 to 20 years, these technologies will deliver some of the most sought-after breakthroughs in neuroscience, such as a truly effective treatment for Alzheimer's disease or an alternative therapy for the large percentage of patients resistant to antidepressant drugs.

New treatments are already beginning to emerge, including brain stimulation devices to treat epilepsy, Parkinson's disease, depression, and even obesity, as well as drugs to target nerve cell growth. At the neurotechnology industry meeting in San Francisco last week, Zack Lynch gave Technology Review the lowdown on some of the new drugs and devices that are emerging from this growing field. Lynch is on the leadership board of MIT's McGovern Institute for Brain Research and is managing director of NeuroInsights, a market analysis company based in San Francisco.


Technology Review: Why neurotechnology?

Zack Lynch: Neuroscience is now moving from a science to an industry. What we're really looking at is an evolution: researchers are now going beyond basic science and developing more effective therapeutics for brain-related illnesses.

The need is huge. One in four people worldwide suffer from a brain-related illness, which costs a trillion dollars a year in indirect and direct economic costs. We all know someone who is affected. That burden will continue to grow with the aging population. We have more people, and more people living longer -- it's a multiplier effect.

TR: Less than 10 years ago, neuroscientists made an exciting discovery. They found that the birth of new neurons, once thought to be confined to the developing brain, continues in adulthood. Now we know that that process, known as neurogenesis, may play a role in treating a number of diseases, including depression. How will that discovery affect development of new therapies?

ZL: Neurogenesis promises a potential preventative or nearly cure capability. Right now what we're doing is palliative, rather than being able to target the mechanisms and potentially regrow neurons. But it's an area that's far out. The technology is just getting started.

TR: Research suggests that antidepressants are effective partly because they stimulate neurogenesis. So companies such as BrainCells, based in San Diego, CA, are screening compounds that promote growth of neural stem cells in the brain. They say these drugs could bring new therapies for depression and, eventually, neurodegenerative diseases.

ZL: It's an exciting area, and the investment community is certainly interested. But the jury is still out.

TR: We're also starting to see a new kind of therapy for brain-related illnesses -- electrical stimulation. Various types of stimulation devices are now on the market to treat epilepsy, depression, and Parkinson's disease. What are some of the near- and far-term technologies we'll see with this kind of device?

ZL: We're seeing explosive growth in this area because scientists are overcoming many of the hurdles in this area. One example is longer battery life, so devices don't have to be surgically implanted every five years. Researchers are also developing much smaller devices. Advanced Bionics, for example, has a next-generation stimulator in trials for migraines.

In the neurodevice space, the obesity market is coming on strong. Several companies are working on this, including Medtronics and Leptos Biomedical. In obesity, even a small benefit is a breakthrough, because gastric bypass surgery [one of the most common treatments for morbid obesity] is so invasive.

In the next 10 years, I think we'll start to see a combination of technologies, like maybe a brain stimulator that releases L-dopa [a treatment for Parkinson's disease]. Whether that's viable is a whole other question, but that possibility is there because of the microelectronics revolution.

The real breakthrough will come from work on new electrodes. This will transform neurostimulator applications. With these technologies, you can create noninvasive devices and target very specific parts of the brain. It's like going from a Model T to a Ferrari. Those technologies will present the real competition for drugs.

Thursday, August 21, 2008

The evolutionary basis of prostitution.

In all societies and cultures, men with high socioeconomic status have greater reproductive success.

The preferences men and women express when choosing a partner reflect the divergent reproductive strategies of the two sexes.

Although both sexes share preferences such as agreeableness, intelligence, and so on in a potential partner, they differ in their preferences when it comes to sexual exchange.

Men look for reliable signs of fecundity, and women prefer signs that indicate commitment to a long-term relationship and parental investment.

The institution of marriage in monogamous societies establishes a reliable mechanism of sexual exchange.

Men are granted exclusive sexual access to their wives, but they are expected to contribute to the family's prosperity and socialization.

The exchange of resources for sex, what scientists call "nuptial gifts," occurs in a large number of species, including humans.

A study by the evolutionary psychologist Daniel Kruger has shown that even in post-industrial societies, with students from elite universities (which suggests above-average purchasing power) as participants, this kind of behavior, the exchange of resources for sex, is a disposition selected by evolution.

Men (university students between 18 and 26 years old) were more likely to exchange resources for sex than women (university students between 18 and 26) were to exchange sex for resources.

Only 9 women, compared with 14 men, reported that others had offered them sex in exchange for resources.

This finding, really surprising given the socioeconomic status of the sample, though the numbers are small, shows the sex difference in the likelihood of exchanging sex for resources; it opens the door to further work on the literature on human mate choice and on "nuptial gifts" in other species, and, of course, it attests to the importance of an evolutionary framework for understanding human behavior and psychology.

Despite the political and social connotations of the concept of (female) prostitution in human societies, it appears that, as in other species, the exchange of sex for resources serves as an evolutionarily stable strategy.


Abstract

Adults in many species exhibit exchanges in reproductively relevant currencies, where males trade resources for sexual relations with females, and females have sex with males in exchange for provisioning. These exchanges can occur outside of a long-term partnership, which itself could be considered a commitment to the accessibility of reproductive currencies provided by each partner. The current study investigated whether young adults who are not in acute need of resources intentionally attempt reproductive currency exchanges outside of dating relationships or formal committed relationships such as marriage; and whether young adults have awareness of being the target of such attempts made by others. College students (N = 475) completed a brief survey assessing their own attempts to exchange reproductively relevant currencies, as well as others’ attempts to make these exchanges with them. Men were more likely to report making attempts to trade investment for sex and women were more likely to report attempted trades of sex for investment. Participants’ experiences of exchange attempts initiated by other individuals mirrored these patterns. Men were more likely to report another individual trying to trade sex for their investment, and women were more likely to report another individual trying to trade investment for sex with them. The vast majority of these attempted exchanges took place outside of existing relationships, although a small portion did lead to short or long term relationships.

Article here.

Wednesday, August 20, 2008

Darwinian foundations of society.

The theory of evolution by natural selection is the meta-theoretical framework that can unify the social sciences just as it has unified biology and psychology.

Today we have a wealth of data confirming the influence of multiple evolutionary forces in shaping our cognitive capacities and human behaviors.

It is plausible to think that the mechanisms of cognition and behavior are the result of adaptations over human evolutionary history to an ancestral environment in which the same problems recurred from generation to generation.

In this sense, it is reasonable to think that both prosocial and antisocial conduct can be strategies arising from a set of cognitive and behavioral mechanisms selected because they proved advantageous to the individual, yesterday and today.

Homicide, rape, and theft can be seen as stable behavioral strategies selected to serve an adaptive function, to solve problems in ancestral environments where material resources and mates were scarce (Buss 2006, Thornhill 2000, Cohen and Machalek 1988), strategies that persist in our modern environment and that we generally call "crimes."

This is not to say that theft, rape, and the rest are inevitable, but neither do we have to fall into the simplification that crimes are the product of pathology.

The passage from the individual and his adaptations to society and its institutions is neither forced nor false.

Institutions such as law, society, and culture as a whole emerge from behaviors and innate dispositions sculpted over human evolution.

Law, whose mission is to keep people from behaving as they would in the absence of law, can only fulfill that mission if it rests realistically on the causes of human behavior, that is, if it applies a realistic model of human behavior.

Law cannot demand anything beyond what humans are capable of doing, and the better we understand the causes of human behavior, the more effective and efficient law will be.

Society and culture are nothing more than the amplified reflection of the patterns of social organization that derive from the conduct of individuals and groups, and as such, society and culture are implicitly designed according to Darwinian evolutionary logic.

Tuesday, August 19, 2008

Tobacco and the brain.

Tobacco, in one of its forms of administration, smoking, and thanks to one of its components, nicotine, is one of the most addictive drugs to have influenced human behavior since it spread around the world five centuries ago after the conquest of the New World (although its cultivation, native to the Americas, is known to date back at least three to five thousand years).

The philosophy of tobacco speaks of a drug whose cultural image has carried great symbolic force in our societies. (Even Woody Allen, in his film "Manhattan," says that holding a cigarette makes you look more attractive, although he doesn't inhale because, he says, it causes cancer. Ha, ha, ha...)

Smoking tobacco was a sign of sophistication in a female smoker and of virility in a male one. With a cigarette in hand, adolescents saw themselves as men. The tobacco habit has even been associated with freedom (a sign of rebellion against patriarchal mores in women) and with democracy (indeed, in the Maya kingdoms various tyrannies forbade their subjects to use tobacco because they associated it with subversion).

Tobacco has been seen as fostering virtues such as patience and reflectiveness (virtues worthy of any detective or writer!).

But in reality, for modern humans tobacco is one of the biggest public health problems in the world, with roughly 6 out of 10 tobacco users developing clinical problems, some of them fatal.

The neuropharmacological details of tobacco dependence and habit maintenance are known to involve activation and stimulation of the dopaminergic system, which reinforces the habit by treating nicotine as a reward. In fact, one of the few smoking-cessation treatments, bupropion, acts on dopamine reuptake in the brain.

But according to new advances in the neurobiology of nicotine dependence, there must be "extra" action on other neuromodulatory systems to explain tobacco dependence (neuromodulatory systems are so called because they share two features: their neural impulses, or action potentials, act at slow transmission speeds compared with other brain processes, and they project to multiple regions of the brain and spinal cord).

Among the neuromodulatory systems affected by tobacco, the main candidate is dopamine, along with noradrenaline, serotonin, acetylcholine, the endocannabinoids, and the endogenous opioid system (endorphins).

Click here to find out why it is so easy to take up smoking but so hard to quit.

Sunday, August 17, 2008

Liberals vs. conservatives: the differences in moral perspective in politics.

Click on any of the images to watch the discussion between the psychologist Jonathan Haidt (bottom left) and the philosopher Joshua Knobe on the differences between liberals and conservatives.

Saturday, August 16, 2008

Schizophrenia is the toll we pay for having a complex brain.

Evolutionary psychiatry is the branch of psychiatry that studies mental illness through the lens of evolution: what is the origin and nature of mental illness, why do we fall ill, and what selective pressures have constrained our mental traits.

One of the founding questions of evolutionary psychiatry is why genes responsible for heritable illnesses that impair individuals' functioning and survival remain in the gene pool.

For evolutionary psychiatrists, mental illnesses are an enigma and, why not say it, a paradox.

From the Darwinian theoretical standpoint we face an exclusive disjunction (or, who knows, an inclusive one): either mental illnesses confer some adaptive advantage and are the product of compensatory (balancing) selection, or they are a by-product of the evolution of some other trait. For schizophrenia, several authors, chief among them the Oxford psychiatrist T. Crow, have argued for the first option: the genes involved in the cerebral asymmetry that gave rise to language, localized in a dominant hemisphere, usually the left, and that lateralizes cognitive functions, also giving rise to handedness (the right hand for most of the population), are at the same time the genes that make us pay a price for capacities like language. The price is schizophrenia.

Evolutionary psychiatry is not without its critics, though.

Critics of evolutionary psychiatry object that we do not know the natural history of mental illnesses well enough to claim they have been naturally selected. In the case of schizophrenia, the absence of written records of its symptoms in ancient authors, unlike depression or personality disorders, has led some to think that schizophrenia is a medical invention.

They also object that, in philosophical terms, mental illnesses are often not natural kinds (well-delimited entities underpinned by biological causes), since their symptoms are heterogeneous and are reified under a single umbrella name.

This current of sociocultural criticism of mental illness became known as the antipsychiatry movement.

Evidently (and if not, I would recommend that any follower of the antipsychiatry movement drop by a psychiatric institution or a hospital psychiatry ward), schizophrenia does have a genetic etiology, although for now, despite the many press headlines announcing the discovery of the genes behind schizophrenia, given current methods and techniques of genetic analysis and the polygenic character of the illness (multiple genes involved), it is very hard to know which genes are implicated in this pathology, which affects 1 person in 100.

A new study that shares the spirit of evolutionary psychiatry looks at the human brain's energy consumption to determine whether its patterns of energy use and metabolism, and their abnormalities, underlie cognitive function and dysfunction in mental illness.

According to this study, led by the evolutionary biologist Philipp Khaitovich of the Shanghai Institutes for Biological Sciences, schizophrenia may be the result of the genetic changes that drove the enormous expansion of the human brain during its evolution.

The researchers examined genes from a publicly sequenced biobank (database), classified by function into 22 categories of positively selected genes, and found that six of these categories are implicated in the pathology of schizophrenia.

The researchers focused on the brain's neuroenergetics, the study of the human brain's energy consumption and metabolism, in other words, the energy economics of how the brain carries out its functions; using a technique known as nuclear magnetic resonance spectroscopy, they found that several metabolites differ in concentration in key brain areas between deceased patients diagnosed with schizophrenia and subjects without schizophrenia.

The authors suggest that the brain, which consumes 20% of the body's metabolic energy while accounting for only about 2% of its mass, "works at the limit of its metabolic capabilities."

So close to the limit, the authors say, that small changes in genes related to metabolic function can cause mental problems.
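As a back-of-the-envelope ratio using the figures quoted above (my own arithmetic, not the paper's):

\frac{20\% \ \text{of the body's energy}}{2\% \ \text{of its mass}} = 10,

so, gram for gram, the brain burns roughly ten times the body's average energy.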

Article here.

Friday, August 15, 2008

The politicization of science.

This post on the blog Neurophilosophy is fascinating.

It does more than expose the miseries of the big scientific publishers and the business they make of putting a price on information, on scientific knowledge, which, almost as a MORAL IMPERATIVE, should be public, open, and available to everyone.

The author of the blog also treats us to a critical comment he wrote in 2002, addressed to that same publishing house, about the political act it committed in 2001 when it censored a scientific publication for presenting scientific data with political repercussions.

The article that made half the world throw up its hands over the geopolitical implications of the text, its interpretations, and so on, is this one.

The article asserts, on the basis of scientific data reviewed double-blind in a high-impact peer-reviewed journal (and which was ultimately censored and physically withdrawn from the publication channels), that Jews and Palestinians are genetically related. (As if this were a surprise!)

But the crux of the matter is that this illegal act of scientific censorship was carried out in the name of religious beliefs that in turn lead to conflicts between peoples.

As Mo, the blog's author, puts it:

"This is suppression of data that shows that Palestinians are human (and not the "cockroaches" or "two-legged beasts" they have been described as by Israeli prime ministers in the past) and not dissimilar to Jews. This revelation therefore reduces the credibility of the claim that Jews are God's chosen people, and that Israel is their homeland, an argument that uses biblical connotations to justify Israeli actions in the occupied territories"

David Attenborough, Richard Dawkins, Richard Leakey, and Jane Goodall talk about the future of our planet.

Thursday, August 14, 2008

Moral Grammar and Intuitive Jurisprudence.

John Mikhail, a professor at Georgetown University Law School, is a specialist in legal philosophy, linguistics, moral philosophy, and cognitive science (philosophy, anthropology, economics, AI, linguistics, neuroscience), and his mission is to fuse ideas from disparate disciplines to better understand moral behavior, the grounding of human rights in an innate "moral instinct," and the construction of jurisprudence.

Starting from Rawls's "linguistic analogy," the idea that the sense of justice can be studied the way linguists study language, and from the notion of a "universal moral grammar," a set of innate principles governing the acquisition and expression of moral behavior just as Chomsky's universal grammar in linguistics explains the acquisition and expression of language, Professor Mikhail is one of the pioneers of the recent revolution in the multidisciplinary study of morality and of what makes us behave well or badly.

He has recently published an article in a social science database in which he extends these ideas and examines the possibility of an artificial intelligence platform, or computer, that could make moral judgments, as well as the mode of reasoning, the intuitions, and the justifications of jury members, what is known as intuitive jurisprudence.

Abstract.

Could a computer be programmed to make moral judgments about cases of intentional harm and unreasonable risk that match those judgments people already make intuitively? If the human moral sense is an unconscious computational mechanism of some sort, as many cognitive scientists have suggested, then the answer should be yes. So too if the search for reflective equilibrium is a sound enterprise, since achieving this state of affairs requires demarcating a set of considered judgments, stating them as explanandum sentences, and formulating a set of algorithms from which they can be derived. The same is true for theories that emphasize the role of emotions or heuristics in moral cognition, since they ultimately depend on intuitive appraisals of the stimulus that accomplish essentially the same tasks. Drawing on deontic logic, action theory, moral philosophy, and the common law of tort, particularly Terry's five-variable calculus of risk, I outline a formal model of moral grammar and intuitive jurisprudence along the foregoing lines, which defines the abstract properties of the relevant mapping and demonstrates their descriptive adequacy with respect to a range of common moral intuitions, which experimental studies have suggested may be universal or nearly so. Framing effects, protected values, and implications for the neuroscience of moral intuition are also discussed
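Just to make the computational reading concrete, here is a toy sketch of my own in Python (not Mikhail's formal model, and far cruder than Terry's five-variable calculus mentioned in the abstract): conduct is flagged as "unreasonable" when its expected harm outweighs its expected benefit. All names and numbers are illustrative assumptions.

def risk_is_unreasonable(p_harm, harm, p_benefit, benefit):
    """Crude balancing test: expected harm versus expected benefit of the conduct."""
    return p_harm * harm > p_benefit * benefit

# Hypothetical numbers for speeding to the hospital with an injured passenger:
print(risk_is_unreasonable(p_harm=0.05, harm=100.0, p_benefit=0.9, benefit=50.0))  # False: benefit wins
print(risk_is_unreasonable(p_harm=0.60, harm=100.0, p_benefit=0.9, benefit=50.0))  # True: harm wins

A real model of moral grammar would of course need far more structure (intentions, means versus side effects, protected values), which is exactly what the paper tries to formalize.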

Article here.

Smell and sex.

The received scientific view is that in vertebrates, including humans, the secretion of gonadal hormones (testosterone, estrogens...) during a critical phase of embryonic development gives rise to the neural systems responsible for sexually dimorphic behavior.

New genetic and molecular advances suggest that this is only a partial picture and that gonadal hormones are not the only factor determining sexual behavior.

Sensory information is critical in setting up the neural pathways and control systems of sexual behavior, and one kind of sensory information in particular: smell.

The molecular neuroscientist Catherine Dulac of Harvard University has shown in laboratory mice that knocking out a gene of the olfactory system makes female mice display male sexual behaviors.

The question now is: does smell, the olfactory system, the nose, play a similar role in humans?

In addition to a main olfactory system, which humans also have, rodents possess a secondary olfactory system, the vomeronasal organ, involved in the perception of pheromones, the sensory information needed to trigger the repertoire of sexual (and social) behaviors.

The vomeronasal organ is considered vestigial in humans, but its presence in our species is still debated in the scientific community, so it is possible that smell plays a much more important role in human sexual behavior than previously thought.

Click here.

Tuesday, August 12, 2008

A nanotechnology milestone.

A few months ago I wrote a post about nanotechnological advances in the construction of metamaterials.

It seems that in this field, where physics, engineering, materials science, optics, and electronics come together, progress is advancing by leaps and bounds.

Today we know that a microscopic metamaterial structure can have a negative refractive index for light, a step toward materials that could render objects invisible to the human eye.
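As a reminder of what a negative index actually means (textbook optics, not something taken from the linked piece): Snell's law relates the angles of incidence and refraction through the refractive indices,

n_1 \sin\theta_1 = n_2 \sin\theta_2,

so if n_2 < 0 the sine of the refraction angle must change sign too, and the refracted ray emerges on the same side of the normal as the incoming ray. That reversed bending is what metamaterials exploit to steer light around an object.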

Click here.

SCIENCE AT THE OLYMPICS: Can Neuroscience Provide a Mental Edge?


Science 1 August 2008:
Vol. 321. no. 5889, pp. 626 - 627
DOI: 10.1126/science.321.5889.626b

Greg Miller.

For Olympic athletes, physical strength, speed, and stamina are a given. But when elite competitors go head to head, it can be the mind as much as the muscles that determines who wins. A collaboration between sports psychologists and cognitive neuroscientists is trying to figure out what gives successful athletes their mental edge.

One focus is why some athletes rebound better than others after a poor performance. Even at the Olympic level, it's not uncommon for an athlete to blow a race early in a meet and then blow the rest of the meet, says Hap Davis, the team psychologist for the Canadian national swim team. To investigate why--and what might be done about it--Davis teamed up with neuroscientists including Mario Liotti at Simon Fraser University in Burnaby, Canada, and Helen Mayberg at Emory University in Atlanta, Georgia.
The researchers used functional magnetic resonance imaging (fMRI) to monitor brain activity in 11 swimmers who'd failed to make the 2004 Canadian Olympic team and three who made the team but performed poorly. The researchers compared brain activity elicited by two video clips: one of the swimmer's own failed race and a control clip featuring a different swimmer. Watching their own poor performance sparked activity in emotional centers in the brain similar to that seen in some studies of depression, the researchers reported in June in Brain Imaging and Behavior. Perhaps more tellingly, the researchers found reduced activity in regions of the cerebral cortex essential for planning movements. Davis speculates that the negative emotions stirred up by reliving the defeat may affect subsequent performances by inhibiting the motor cortex.

Davis and neuroscientist Dae-Shik Kim at Boston University (BU) School of Medicine are now using diffusion tensor imaging to visualize the connections between emotion and motor-planning brain regions. Kim hypothesizes that these connections might differ in athletes who are better able to shake off a bad performance. So far his team has scanned about a dozen BU athletes. Meanwhile, Davis and collaborators have been looking for interventions that would perk up the motor cortex. Additional fMRI studies, as yet unpublished, suggest that positive imagery--imagining swimming a better race, for example--boosts motor cortex activity, even when athletes see a videotaped failure. Jumping exercises have a similar effect, Davis says.

The work has already changed the Canadian team's poolside strategy, he says: "We pick up on [any negativity] right away and intervene." Davis has the swimmers review a video of a bad performance within half an hour and think about how they would fix it. Anecdotally, it seems to be working, he says. "We're seeing more people turn it around."

The fMRI findings suggest that quick, positive intervention helps athletes bounce back, says Leonard Zaichkowsky, a sports psychologist at BU who collaborates with Davis and Kim. But coaches often take a different approach with athletes. "Typically what happens is they've got hard-assed coaches reaming them out for a bad performance," he says. "It's the opposite of what they should be doing."

Saturday, August 09, 2008

Autobiography of Daniel Dennett (Part 1).


What makes a philosopher? In the first of a two-part mini-epic, Daniel C. Dennett contemplates a life of the mind – his own.

Part 1: The pre-professional years.

It came as a pleasant surprise to me when I learned – around age twelve or thirteen – that not all the delicious and unspeakable thoughts of my childhood had to be kept private. Some of them were called 'philosophy', and there were legitimate, smart people who discussed these fascinating topics in public. While less immediately exciting than some of the other, still unspeakable, topics of my private musings, they were attention-riveting, and they had an aura of secret knowledge. Maybe I was a philosopher. That's what the counselors at Camp Mowglis in New Hampshire suggested, and it seemed that I might be good at it.

My family didn't discourage the idea. My mother and father were both the children of doctors, and both had chosen the humanities. My mother, an English major at Carleton College in Minnesota, went on for a Masters in English from the University of Minnesota, before deciding that she simply had to get out of Minnesota and see the world. Never having been out the Midwest, and bereft of any foreign languages, she took a job teaching English at the American Community School in Beirut. There she met my father, Daniel C. Dennett Jr, working on his PhD in Islamic history at Harvard while teaching at the American University of Beirut. His father, the first Daniel C. Dennett, was a classic small town general practitioner in Winchester, Massachusetts, the suburb of Boston where I spent most of my childhood. So yes, I am Daniel C. Dennett III; but since childhood I've disliked the Roman numerals, and so I chose to court confusion among librarians (how can DCD Jr be the father of DCD?) instead of acquiescing in my qualifier.

My father's academic career got off to a fine start, with an oft-reprinted essay, 'Pirenne and Muhammed', which I was thrilled to find on the syllabus of a history course I took as an undergraduate. His first job was at Clark University. When World War II came along, he put his intimate knowledge of the Middle East to use as a secret agent in the OSS, stationed in Beirut. He was killed on a mission, in an airplane crash in Ethiopia in 1947, when I was five. So my mother and two sisters and I moved from Beirut to Winchester, where I grew up in the shadow of everybody's memories of a quite legendary father. In my youth some of my friends were the sons of eminent or even famous professors at Harvard or MIT, and I saw the toll it took on them as they strove to be worthy of their fathers' attention. I shudder to think of what would have become of me if I had had to live up to my own father's actual, living expectations and not just to those extrapolated in absentia by his friends and family. As it was, I was blessed with the bracing presumption that I would excel, and few serious benchmarks against which to test it. It was assumed by all that I would eventually go to Harvard and become a professor – of one humanities discipline or another. The fact that from about the age of five I was fascinated with building things, taking things apart, repairing things, never even prompted the question of whether I might want to become an engineer – a prospect in our circle about as remote as becoming a lion tamer. I might become an artist – a painter, sculptor or musician – but not an engineer.

In my first year in Winchester High School I had two wonderful semesters of ancient history, taught by lively, inspiring interns from the Harvard School of Education. I poured my heart into a term paper on Plato, with a drawing of Rodin's Thinker on the cover. Deep stuff, I thought; but the fact was that I hardly understood a word of what I read for it. More important, really, was that I knew then – thank you, Catherine Laguardia and Michael Greenebaum wherever you are – that I was going to be a teacher. The only question was, what subject?

I spent my last two years of high school at Phillips Exeter Academy, largely because my father's old friends persuaded my mother that this was obligatory for the son of DCD Jr. Thank you, long-departed friends. There I was immersed in a wonderfully intense intellectual stew, where the editor of the literary magazine had more cachet than the captain of the football team; where boys read books that weren't on the assigned reading; where I learned to write (and write, and write, and write). My Olivetti Lettera portable typewriter (just like Michael Greenebaum's – cool!) churned out hundreds of pages over two years, but none of it was philosophy yet.

As much to upset the family's expectations as for any other reason, I eschewed Harvard for Wesleyan University, and arrived with advanced placement in math and English, having had excellent teachers in both areas at Exeter. I didn't want to go on in calculus, but they twisted my arm to take an advanced math course, under the mistaken idea that I was some sort of mathematical prodigy. I acquiesced, signing up for something called 'Topics in Modern Mathematics', taught by a young lecturer from Princeton, the logician Henry Kyburg in his first job. Since I and a grad student in the math department were the only two students enrolled in the course, Henry asked and got our permission to make it a course in mathematical logic. He promptly immersed us in Quine's Mathematical Logic, followed by Kleene, Ramsey, and even Wittgenstein's Tractatus, among other texts. Quite a first course in logic for a seventeen year-old! If I had been a mathematical prodigy, as advertised, this would no doubt have made pedagogical sense; but I was soon gasping for air and in danger of drowning. Freshman year was turning out to be more challenging than I had expected.

One night as I crammed in the math library, I took a breather and scouted out the shelves. Quine's From a Logical Point of View caught my eye, and I sat down to sample it. By breakfast I had finished my first of several readings of it, and made up my mind to transfer to Harvard. This Quine person was very, very interesting – but wrong. I couldn't yet say exactly how or why, but I was quite sure. So I decided, as only a freshman could, that I had to confront him directly and see what I could learn from him – and teach him! A reading of Descartes' Meditations in my first philosophy course, with Louis Mink, not only confirmed my conviction that I had discovered what it was I was going to teach, but narrowed the field considerably: philosophy of mind and language transfixed my curiosity.

When I showed up at Harvard in the fall of 1960, the first course I signed up for was Quine's philosophy of language course, and the main text was his brand new book, Word and Object. Perfect timing. I devoured the course, and was delighted to find that the other students in the class were really quite as good as I had hoped Harvard students would be. Most were grad students; among them (if memory serves) were David Lewis, Tom Nagel, Saul Kripke, Gil Harman, Margaret Wilson, Michael Slote, David Lyons. A fast class.

When it came to the final exam I had never been so well prepared, with As on both early papers, and every reading chewed over and over. But I froze. I knew too much, had thought too much about the problems and could see, I thought, way beyond the questions posed – too far beyond to enable any answer at all. Quine's teaching assistant, Dagfinn Follesdal, must have taken pity on me, for I received a B- in the course. Follesdal also agreed to be my supervisor when two years later I told him that I'd been working on my senior thesis, 'Quine and Ordinary Language' ever since I'd taken the course. I didn't want Quine to supervise me, since he'd probably show me I was wrong before I got a chance to write it out, and then where would I be? I had sought Quine out, however, for bibliographical help, asking him to direct me to the best anti-Quinians. I needed all the allies I could find. He directed me to Chomsky's Syntactic Structures, the first of Lotfi Zadeh's papers on fuzzy logic, and Wittgenstein's Philosophical Investigations, which I devoured in the summer of 1962, while on my honeymoon job as a sailing and tennis instructor at Salter's Point, a family summer community in Buzzards Bay (my bride, Susan, was the swimming instructor). 1962-3, my senior year at Harvard, was exciting but far from carefree – I was now a married man at the age of 20, and I had to complete my four-year project to Refute Quine, who was very, very interesting but wrong. Freed from the diversions and distractions of student life, I worked with an intensity I have seldom experienced. I can recall several times reflecting that it really didn't matter in the larger scheme of things whether I was right or wrong: I was engulfed in doing exactly what I wanted to be doing, pursuing a valuable quarry through daunting complexities, and figuring out for myself answers to some of the most perplexing questions I'd ever encountered. Dagfinn, bless his heart, knew enough not to try to do more than gently steer me away from the most dubious overreachings in my grand scheme. I was not strictly out of control, but I was beyond turning back.

The thesis was duly typed up in triplicate and handed in (by a professional typist, back in those days before word-processing). I anxiously awaited the day when Quine and young Charles Parsons, my examiners, would let me know what they made of it. Quine showed up with maybe half a dozen single-spaced pages of comments. I knew at that moment that I was going to be a philosopher. (I was also an aspiring sculptor, and had shown some of my pieces in exhibits and competitions in Boston and Cambridge. Quine had taken a fancy to some of my pieces and always remarked positively on them whenever we met, so I had been getting equivocal signals from my hero – was he really telling me to concentrate on sculpture?) On this occasion Quine responded to my arguments with the seriousness of a colleague, conceding a few crucial points (hurrah!) and offering counter-arguments to others (just as good, really). Parsons sided with me on a point of contention. I can't remember what it was, but I was mightily impressed that he would join David against Goliath. The affirmation was exhilarating. Maybe I really was going to be a philosopher.

But if so, I was going to be a rather different philosopher from those around me. I had no taste for much that delighted my Harvard classmates or the graduate students. Ryle's Concept of Mind was one of the few contemporary books in philosophy that I actually liked. (Another was Stephen Toulmin's The Place of Reason in Ethics, which seems to have vanished without a trace, whereas I thought it was clearly superior to the other readings in my ethics courses.) I couldn't see why others found Ryle so unpersuasive. To me, he was obviously and refreshingly right about something deep, in spite of various overstatements and baffling bits. I decided that Ryle would make a logical next step in my education, so I applied to Oxford, to read for the notoriously difficult B.Phil degree. Burton Dreben tried to dissuade me – now that Austin had died, he assured me, there was nobody, really, in Oxford with whom to study. I also applied to Berkeley, though I can't remember why. And I applied to Harvard, but Harvard wisely had a policy of not admitting their own graduates, and I treasured the letter of rejection I got from the then Dean of Graduate Admissions, Nina Dennett: she signed it 'Aunt Nina', although she was a somewhat more distant relative. I also got rejected by all three Oxford colleges to which I had applied. Back then, they had no university-wide admissions system, and I had applied, as it turned out, to three of the most popular colleges among Rhodes and Marshall scholars: Balliol, Magdalen and University. They were oversubscribed with Americans with scholarships and had no room for me, even though I would be paying for myself with a modest legacy from DCD the first, who had died a few years earlier. But just as I was about to send Berkeley my downpayment to reserve a married student apartment for the fall term, out of the blue I received a letter from the Principal of Hertford College, Oxford, telling me that they were prepared to admit me to read for the B.Phil in philosophy. I had not applied to Hertford, and in fact had never even heard of it, and at first I suspected that somebody who knew of my disappointment was playing an evil prank on me. I looked up Hertford College in the Oxford University Bulletin, confirmed its reality, and accepted. It didn't matter which college I was in, reading for the B.Phil: my supervisor would be one of the professors – Ryle, Ayer or Kneale – and I figured that I would almost certainly be able to work with Ryle, although his name hadn't come up in my correspondence with Hertford. Years later, Ryle told me that he'd been on the admissions committee at Magdalen and read Quine's letter of recommendation. Magdalen couldn't fit me in, so he'd sent the application with a little note to a friend in Hertford, where they were eager to get a few American grad students. So I owed more than I guessed to both my mentors.

My wife and I sailed to England in the summer of 1963. I carried with me an idea I had had about qualia, as philosophers call the phenomenal qualities of experiences, such as the smell of coffee or the 'redness' of red. In my epistemology course at Harvard with Roderick Firth, I had had what struck me as an important insight – obvious to me but strangely repugnant to those I had tried it out on. I claimed that what was caused to happen in you when you looked at something red only seemed to be a quale – a homogeneous, unanalyzable, self-intimating 'intrinsic' property. Subjective experiences of color, for instance, couldn't actually owe the way they seemed to their intrinsic properties; their intrinsic properties could in principle change without any subjective change; what mattered for subjectivity were properties that were – I didn't have a word for it then– functional, relational. The same was going to be true of [mental] content properties in general, I thought. The meaning of an idea, or a thought, just couldn't be a self-contained, isolated patch of psychic paint (what I later jocularly called 'figment'); it had to be a complex dispositional property – a set of behavior-guiding, action-prompting triggers. This idea struck me as congenial with, if not implied by, what Ryle was saying. But when I got to Oxford, I found that these ideas seemed even stranger to my fellow graduate students at Oxford than at Harvard.

This was already beyond the heyday and into the decline of 'ordinary language philosophy', but thanks to the lamentable phenomenon of philosophical hysteresis (graduate students tend to crowd onto bandwagons just as they grind to a halt), Oxford was enjoying total domination of Anglophone philosophy. It was a swarming Mecca for dozens – maybe hundreds – of pilgrims from the colonies who wanted to take the cloth and learn the moves. There was the Voltaire Society and the Ockham Society, just for graduate students. At one of their meetings in my first term, in the midst of a discussion of Anscombe's Intention, as I recall, the issue came up of what to say about one's attempts to raise one's arm when it had gone 'asleep' from lying on it. At the time I knew nothing about the nervous system, but it seemed obvious to me that something must be going on in one's brain that somehow amounted to trying to raise one's arm, and it might be illuminating to learn what science knew about this. My suggestion was met with incredulous stares. What on earth did science have to teach philosophy? This was a philosophical puzzle about 'what we would say', not a scientific puzzle about nerves and the like. This was the first of many encounters in which I found my fellow philosophers of mind weirdly complacent in their ignorance of brains and psychology, and I began to define my project as figuring out as a philosopher how brains could be, or support, or explain, or cause, minds. I asked a friend studying medicine at Oxford what brains were made of, and vividly remember him drawing simplified diagrams of neurons, dendrites, axons – all new terms to me. It immediately occurred to me that a neuron, with multiple inputs and a modifiable branching output, would be just the thing that could compose into networks which could learn by a sort of evolutionary process. Many others have had the same idea, of course, before and since. Once you get your head around it, you see that this really is the way – probably, in the end, the only way – to eliminate the middleman, the all-too-knowing librarian or clerk or homunculus who manipulates the ideas or mental representations, sorting them by content.

With this insight driving me, I began to see how to concoct something of a 'centralist' theory of intentionality. (This largely unexamined alternative was suggested by Charles Taylor in his pioneering book, The Explanation of Behaviour in 1964.) The failure of Skinnerian and Pavlovian 'black box' behaviorism to account for human and animal behavior purely in the 'extensional' terms of histories of stimulus and response suggested that we needed to justify a non-extensional, 'intensional' (with an 's') theory of intentionality (with a 't'): a theory that looked inside at the machinery of mind and explained how internal states and events could be about things, and thereby motivate the mental system of which they were a part to decide on courses of action. [see box on p.24] The result would be what would later be called a functionalist, and then teleofunctionalist, theory of content, in which Brentano and Husserl (thank you, Dagfinn) and Quine could all be put together, but at the subpersonal level. The personal/subpersonal distinction was my own innovation, driven by my attempts to figure out what on earth Ryle was doing and how he could get away with it. It is clear that my brain doesn't understand English – I do – and my hand doesn't sign a contract – I do. But it is also clear that I don't interpret the images on my retinas, and I don't figure out how to make my fingers grasp the pen. We need the subpersonal level of explanation to account for the remarkably intelligent components of me that do the cognitive work that makes it possible for me to do clever things. In order to understand this subpersonal level of explanation, I needed to learn about the brain; so I spent probably five times as much energy educating myself in Oxford's Radcliffe Science Library as I did reading philosophy articles and books.

I went to Ryle, my supervisor, to tell him that I couldn't possibly succeed in the B.Phil, which required one to submit a (modest) thesis and take three very tough examinations in the space of a few weeks at the end of one's second year. As I have already mentioned, I was an erratic examination-taker under the best of conditions, and I was consumed with passion to write my thesis. I knew to a moral certainty that I would fail at least one of the examinations simply because I couldn't make myself prepare for it while working white hot on the thesis. I proposed to switch to the B.Litt; a thesis-only degree that would let me concentrate on the thesis and then go off to Berkeley for a proper PhD. To my delight and surprise, Ryle said that I might have to settle for a B.Litt as a consolation prize of sorts, but that he was prepared to recommend me for the D.Phil, which also required just a thesis. With that green light, I was off and running, but the days of inspiration were balanced by weeks and months of confusion, desperation and uncertainty. A tantalizing source of alternating inspiration and frustration was Hilary Putnam, whose 'Minds and Machines' (1960) I had found positively earthshaking. I set to work feverishly to build on it in my own work, only to receive an advance copy of Putnam's second paper on the topic, 'Robots: Machines or Artificially Created Life?' from my mole back at Harvard (it was not published until 1967). This scooped my own efforts and then some. No sooner had I recovered and started building my own edifice on Putnam paper number two than I was spirited a copy of Putnam paper number three, 'The Mental Life of Some Machines' (eventually published in 1967) and found myself left behind yet again. So it went. I think I understood Putnam's papers almost as well as he did – which was not quite well enough to see farther than he could what step to take next. Besides, I was trying to put a rather different slant on the whole topic, and it was not at all clear to me that, or how, I could make it work. Whenever I got totally stumped, I would go for a long, depressed walk in the glorious Parks along the River Cherwell. Marvelous to say, after a few hours of tramping back and forth with my umbrella, muttering to myself and wondering if I should go back to sculpture, a breakthrough would strike me and I'd dash happily back to our flat and my trusty Olivetti for another whack at it. This was such a reliable source of breakthroughs that it became a dangerous crutch; when the going got tough, I'd just pick up my umbrella and head out to the Parks, counting on salvation before suppertime.

Gilbert Ryle himself was the other pillar of support I needed. In many regards he ruled Oxford philosophy at the time, as editor of Mind and informal clearing-house for jobs throughout the Anglophone world, but at the same time he stood somewhat outside the cliques and coteries, the hotbeds of philosophical fashion. He disliked and disapproved of the reigning Oxford fashion of clever, supercilious philosophical one-upmanship, and disrupted it when he could. He never 'fought back'. In fact, I tried to provoke him, with elaborately-prepared and heavily-armed criticisms of his own ideas, but he would genially agree with all my good points as if I were talking about somebody else, and get us thinking what repairs and improvements we could together make of what remained. It was disorienting, and my opinion of him then – often expressed to my fellow graduate students, I am sad to say – was that while he was wonderful at cheering me up and encouraging me to stay the course, I hadn't learned any philosophy from him.

I finished a presentable draft of my dissertation in the minimum time (six terms or two years) and submitted it with scant expectation that it would be accepted on first go. On the eve of submitting it, I came across an early draft of it, and compared the final product with its ancestor. To my astonishment, I could see Ryle's influence on every page. How had he done it? Osmosis? Hypnotism? This gave me an early appreciation of the power of indirect methods in philosophy. You seldom talk anybody out of a position by arguing directly with their premises and inferences. Sometimes it is more effective to nudge them sideways with images, examples, helpful formulations that stick to their habits of thought. My examiners were A.J. Ayer and the great neuroanatomist J.Z. Young from London – an unprecedented alien presence at a philosophy viva, occasioned by my insistence on packing my thesis with speculations on brain science. He too had been struck by the idea of learning as evolution in the brain, and was writing a book on it, so we were kindred spirits on that topic, if not on the philosophy, which he found intriguing but impenetrable. Ayer was reserved. I feared he had not read much of the thesis, but I later found out he was simply made uncomfortable by his friend Young's too-enthusiastic forays into philosophy, and he found silence more useful than intervention. I waited in agony for more than a week before I learned via a cheery postcard from Ryle that the examiners had voted me the degree.

Since I had the degree, I wouldn't need to go to U.C. [University of California] Berkeley after all. So on a wonderful day in May 1965, a few weeks after my 23rd birthday, I sent off two letters to California: I accepted an Assistant Professorship at U.C. Irvine, where A.I. Melden was setting up a philosophy department in a brand new campus of the university; and I declined a Teaching Assistantship at U.C. Berkeley, saying only that I had found another position. I didn't dare say that it was a tenure track position at a sister campus! I was a little worried that there might be some regulations of the University of California prohibiting this sort of thing, whatever sort of thing it was. Ah, those were the glorious expansionist days in American academia, when it was a seller's market in jobs, and I had garnered two solid offers and a few feelers without so much as an interview, let alone a campus visit and job talk. For formality's sake, Melden asked me to send a curriculum vitae along with my official acceptance letter, and I had to ask around Oxford to find out what such an obscure document might be.

© Prof. Daniel C. Dennett 2008

Dan Dennett is Co-Director of the Center for Cognitive Studies and is Austin B. Fletcher Professor of Philosophy at Tufts University. His latest book is Breaking the Spell (Viking, 2006).

Kwabena Boahen on neuroprosthetics and neuroengineering.

Friday, August 08, 2008

Paleoeconomics.


Paleoeconomics is the field of study that tries to (re)construct human behavior in prehistory by analyzing the mechanisms by which ideas were transmitted, such as learning and imitation; the physical means of that transmission, such as "trade" and the public places where raw materials or goods (and culture in general) changed hands; the development of technology (the domestication of plants: agriculture; and the domestication of animals: husbandry); and the mental attributes that make all of this possible, such as trust, cooperation, the detection of regularities in the environment, abstraction, and the sense of fairness or justice.

One of the great mysteries that paleoeconomics tries to solve, together with the subdisciplines that make up this broad interdisciplinary confederation (economics, behavioral economics, experimental economics, neuroeconomics, archaeology, paleontology, anthropology, philosophy, classics, history, biopolitics, sociobiology... are some of the disciplines that converge in paleoeconomics), is what caused Neanderthals to succumb to modern humans.

Several theories point to competitive exclusion: a bottleneck in which two species coexist and compete for scarce resources in a hostile and uncertain, though stably irregular, geographic environment, with the result that only the better adapted survives and the other goes extinct.
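
As a purely illustrative sketch (not drawn from any of the studies mentioned here, and with arbitrary, assumed parameter values), the logic of competitive exclusion can be captured with the classic Lotka-Volterra competition equations: when one species competes even slightly more efficiently for the same shared resources, the other is driven toward extinction.

    # Illustrative Lotka-Volterra competition model (assumed parameters).
    # Two species share one niche; the slightly more efficient competitor
    # (species 1) drives the other (species 2) toward extinction.

    def simulate(n1=0.5, n2=0.5, r1=0.02, r2=0.02,
                 a12=0.9, a21=1.1, k1=1.0, k2=1.0, steps=5000):
        """Discrete-time Lotka-Volterra competition between two populations.

        n1, n2 : initial population sizes (fractions of carrying capacity)
        r1, r2 : intrinsic growth rates
        a12    : competitive effect of species 2 on species 1
        a21    : competitive effect of species 1 on species 2 (here > a12,
                 so species 1 is the better competitor)
        """
        for _ in range(steps):
            n1 += r1 * n1 * (1 - (n1 + a12 * n2) / k1)
            n2 += r2 * n2 * (1 - (n2 + a21 * n1) / k2)
        return n1, n2

    if __name__ == "__main__":
        final1, final2 = simulate()
        print(f"species 1: {final1:.3f}  species 2: {final2:.3f}")
        # With these assumed parameters species 2 collapses toward zero,
        # which is the competitive exclusion outcome described above.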

The fatal destiny of the Neanderthals, who disappeared abruptly some 30,000 to 40,000 years ago, has long captivated (and worried) scientists.

How is it possible that a species that populated Central Europe and western Asia, successfully adapting to glacial climates, suddenly went extinct?

But the competitive exclusion hypothesis becomes problematic if we take into account one of the models of modern human origins, the "Out of Africa" model, which holds that an anatomically modern human that evolved biologically and behaviorally in Africa migrated to the rest of the world and replaced the other species.

If this is correct, and given the evidence it very probably is, then scientists, and paleoeconomists in particular, must reach for new ways of thinking about the behavioral differences between Neanderthals and modern humans.

Without a doubt, in this effort to work out what kind of behavioral model can explain the collapse of the Neanderthals and the triumph of anatomically modern humans, the new approaches that combine economics with neuroscience and cognitive science have much to say.

It is these approaches that have made it possible to understand the brain mechanisms behind the sense of justice or fairness, cooperation, and trust needed to succeed at domesticating land and animals, and at interacting efficiently with one another.

And they have achieved this by formalizing strategic human behavior with "game theory" and its tools, and by using non-invasive neuroimaging techniques to visualize the brain areas responsible for these behaviors and to build sound theories.
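
To give a flavor of the kind of formalization involved (a toy sketch with made-up payoffs and an assumed decision rule, not a model taken from the literature cited here), the ultimatum game is one of the standard tools in this research: one player proposes how to split a sum, the other accepts or rejects, and the frequent rejection of unfair offers is the behavioral regularity that the neuroimaging studies then try to localize in the brain.

    # Toy ultimatum game (assumed, illustrative payoffs), a standard
    # game-theory tool in behavioral and neuro-economics for studying fairness.

    import random

    def responder_accepts(offer, pot, envy=0.5):
        """A simple fairness-sensitive responder.

        Accepts when the value of the offer, reduced by 'envy' at receiving
        less than half the pot, is still positive. A crude stand-in for
        inequity-aversion style preferences.
        """
        disadvantage = max((pot - offer) - offer, 0)
        return offer - envy * disadvantage > 0

    def play_round(pot=10):
        offer = random.randint(0, pot)          # proposer picks a split at random
        accepted = responder_accepts(offer, pot)
        proposer_payoff = pot - offer if accepted else 0
        responder_payoff = offer if accepted else 0
        return offer, accepted, proposer_payoff, responder_payoff

    if __name__ == "__main__":
        random.seed(0)
        for _ in range(5):
            print(play_round())
        # A purely self-interested responder would accept any offer above zero;
        # the fairness-sensitive toy responder above rejects low offers instead.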

Until Jared Diamond arrived with the work laid out in several of his books, among them Guns, Germs and Steel, which emphasizes geographic determinism together with the ability to domesticate wild plants and animals, all previous theories either converged on the biological principle of competitive exclusion or put forward racist theories of intellectual superiority to explain the differing development of peoples, cultures, and, in this case, species.

These explanations, however, are very simple, too simple, and wrong besides.

What truly brought about the extinction of the Neanderthals and the triumph of modern humans was the capacity of anatomically modern humans to cooperate, to reach agreements, and to empathize with one another ("theory of mind", the ability to read the intentions, desires, and beliefs of others), together with the division of labor and the trade in goods and products, which requires not only theory of mind but also a sense of justice or fairness and trust (how else could one carry out the economic transactions that lead to the development of a people, society, or civilization?).

It is thanks to morality, to the capacity to put ourselves in someone else's place, that we are here.

In other words, it is thanks to Homo Moralis, who began to do economics on a large scale, that human beings have come as far as they have.

The paleoeconomist Richard Horan puts it this way in a recent article:

Abstract.

One of the great puzzles in science concerns the rise of early modern humans and the fall of Neanderthals. A number of theories exist and many support the biological principle of competitive exclusion: if two similar species occupy exactly the same niche, only the most efficient will survive; the other will go extinct. Such ideas of biological efficiency pertain to biological or physiological factors like lower mortality rates or greater efficiency in hunting. Evidence for such mechanistic theories in which biology is destiny, however, is limited. In response, this paper develops a behavioral model of Neanderthal extinction. We show how the division of labor and subsequent trading among early modern humans could have helped them to overcome potential biological deficiencies, and therefore lead to the demise of Neanderthals.

Article here.

Thursday, August 07, 2008

The attractive woman wants it all.

Human beings have a plural, eclectic, context-dependent strategy when it comes to seeking a mate.

Men tend to be more promiscuous than women because producing sperm is cheaper than producing eggs, and because the subsequent care of the child, whatever political correctness says about sharing responsibilities, is by biological imperative always a greater burden for the woman (breastfeeding, attachment bonds [bodily contact and exploration], etc.).

This is why mate-search strategies differ by sex: the evolutionary costs are different for the two sexes.


Depending on physical condition (attractiveness), an individual will have a higher or lower value as a potential mate in the "market" and game of seduction and courtship.

But physical attractiveness is not the only factor in mate choice: emotional commitment to the relationship, the inclination to care for children or parental investment... as well as socioeconomic status, are all variables that enter into the biological computation of choosing a mate.

In the case of women, the sex that truly does the choosing, the decisive variables are the menstrual cycle (a woman in the follicular phase of the cycle, when conception is most likely, tends to choose partners with "good genes": the hypothesis that physical indicators of health, attractiveness [facial symmetry], and even parental attitudes can be inferred from the "secondary sexual characteristics" of a potential partner's face, such as the jaw, chin, or hair, which signal virility or masculinity), her own physical and socioeconomic characteristics, and those of her potential partner.

When it comes to the mate choice of an attractive woman (rated as attractive by the opposite sex), she wants it all, according to the evolutionary psychologist David Buss and colleagues: good genes, parental investment, socioeconomic status, and emotional commitment to the relationship.

Abstract.
The current research tests the hypothesis that women have an evolved mate value calibration adaptation that functions to raise or lower their standards in a long-term mate according to their own mate value. A woman’s physical attractiveness is a cardinal component of women’s mate value. We correlated observer-assessed physical attractiveness (face, body, and overall) with expressed preferences for four clusters of mate characteristics (N = 214): (1) hypothesized good-gene indicators (e.g., masculinity, sexiness); (2) hypothesized good investment indicators (e.g., potential income); (3) good parenting indicators (e.g., desire for home and children), and (4) good partner indicators (e.g., being a loving partner). Results supported the hypothesis that high mate value women, as indexed by observer-judged physical attractiveness, expressed elevated standards for all four clusters of mate characteristics. Discussion focuses on potential design features of the hypothesized mate-value calibration adaptation, and suggests an important modification of the trade-off model of women’s mating. A minority of women—notably those low in mate value who are able to escape male mate guarding and the manifold costs of an exposed infidelity—will pursue a mixed mating strategy, obtaining investment from one man and good genes from an extra-pair copulation partner (as the trade-off model predicts). Since the vast majority of women secure genes and direct benefits from the same man, however, most women will attempt to secure the best combination of all desired qualities from the same man.

Article here.

Quote of the day.

"Nunca somos tan felices, ni tan infelices como pensamos."
-La Rochefoucald-

Tuesday, August 05, 2008

Marijuana and the brain.



Addiction is a pathology of motivation, valuation, and decision-making, with repercussions at the neuronal, synaptic, membrane-receptor, intracellular-biochemical, gene-expression, and behavioral levels, and with serious consequences for the individual and for society.

Any drug of abuse "hijacks" the reward-processing systems the brain normally uses as guiding signals for decision-making and the pursuit of goals, subjecting the person to a spiral of "disorders of valuation", to borrow P. Read Montague's phrase.
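
To make the idea of "hijacked" valuation signals concrete, here is a minimal sketch (not Montague's model; the update rule and the numbers are assumptions) in the spirit of standard reward-prediction-error accounts of dopamine: for an ordinary reward, learning eventually cancels the surprise, but if a drug pharmacologically adds a boost to the error signal that learning can never cancel, the learned value of the drug cue keeps inflating.

    # Minimal reward-prediction-error sketch (assumed parameters): a drug adds
    # a boost to the error signal that learning can never fully cancel, so the
    # drug cue's learned value keeps inflating.

    def learn_value(reward, drug_boost=0.0, alpha=0.1, trials=200):
        """Simple delta-rule learning of a cue's value.

        reward     : objective reward delivered on each trial
        drug_boost : pharmacological addition to the prediction error
                     (0 for a natural reward)
        alpha      : learning rate
        """
        value = 0.0
        for _ in range(trials):
            prediction_error = reward - value
            if drug_boost:
                # the drug's direct action keeps the error artificially positive
                prediction_error = max(prediction_error, 0.0) + drug_boost
            value += alpha * prediction_error
        return value

    if __name__ == "__main__":
        print("natural reward cue value:", round(learn_value(reward=1.0), 2))
        print("drug cue value:", round(learn_value(reward=1.0, drug_boost=0.5), 2))
        # The natural cue's value converges to the actual reward (about 1.0);
        # the drug cue's value overshoots it and keeps growing, a toy picture
        # of the "hijacked" valuation described above.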

In the specific case of marijuana, an herbaceous plant used as a psychoactive (owing to its active ingredient, delta-9-tetrahydrocannabinol), whether by inhaling the smoke of the dried leaves of the flower or in its pressed-pollen form, the well-known hashish or "cannabis", it is known to affect the systems that process emotion, memory, and judgment.

Cannabis is the most widely consumed illicit drug among youth and young adults.

It is also known that, besides affecting short-term memory, marijuana use blocks the consolidation of long-term memory and impairs problem-solving ability.

As a global trend, use of this psychoactive is increasing in adolescence, a critical phase of brain development, and its use during this phase of life has long-term neurobiological consequences, even though studies are few and at times contradictory.

The relevant literature on marijuana use during adolescence is clear. Marijuana can induce subtle changes in the brain circuitry responsible for emotional processing and cognition, and it increases susceptibility to the use of more toxic substances, not to mention that it is a major risk factor for psychosis and schizophrenia.

This does not prevent us from recognizing marijuana's therapeutic potential for relieving certain symptoms associated with chronic illnesses, thanks to its analgesic action, as in the case of cancer treatments.

Moreover, the brain naturally produces an endogenous analogue of the active ingredient of cannabis, known as anandamide, which is all the more reason not to need to stimulate our mind/brain externally in order to feel "stoned" or "high".

That said, I do not want to moralize gratuitously, and I would never prohibit the use of any substance. In this life we have to know what we want, provided the moment is right, one in which we can fully exercise a mature decision, and adolescence is not it.


Click here.

Sunday, August 03, 2008

In case anyone still wasn't clear...

...as far back as 1996 the pen of the writer Tom Wolfe had already settled it:

SORRY, BUT YOUR SOUL JUST DIED.

Tom Wolfe.

From neuroscience to Nietzsche. A sobering look at how man may perceive himself in the future, particularly as ideas about genetic predeterminism take the place of dying Darwinism.

Being a bit behind the curve, I had only just heard of the digital revolution last February when Louis Rossetto, cofounder of Wired magazine, wearing a shirt with no collar and his hair as long as Felix Mendelssohn's, looking every inch the young California visionary, gave a speech before the Cato Institute announcing the dawn of the twenty-first century's digital civilization. As his text, he chose the maverick Jesuit scientist and philosopher Pierre Teilhard de Chardin, who fifty years ago prophesied that radio, television, and computers would create a "noösphere," an electronic membrane covering the earth and wiring all humanity together in a single nervous system. Geographic locations, national boundaries, the old notions of markets and political processes--all would become irrelevant. With the Internet spreading over the globe at an astonishing pace, said Rossetto, that marvelous modem-driven moment is almost at hand.

Could be. But something tells me that within ten years, by 2006, the entire digital universe is going to seem like pretty mundane stuff compared to a new technology that right now is but a mere glow radiating from a tiny number of American and Cuban (yes, Cuban) hospitals and laboratories. It is called brain imaging, and anyone who cares to get up early and catch a truly blinding twenty-first-century dawn will want to keep an eye on it.

Brain imaging refers to techniques for watching the human brain as it functions, in real time. The most advanced forms currently are three-dimensional electroencephalography using mathematical models; the more familiar PET scan (positron-emission tomography); the new fMRI (functional magnetic resonance imaging), which shows brain blood-flow patterns, and MRS (magnetic resonance spectroscopy), which measures biochemical changes in the brain; and the even newer PET reporter gene/PET reporter probe, which is, in fact, so new that it still has that length of heavy lumber for a name. Used so far only in animals and a few desperately sick children, the PET reporter gene/PET reporter probe pinpoints and follows the activity of specific genes. On a scanner screen you can actually see the genes light up inside the brain.

By 1996 standards, these are sophisticated devices. Ten years from now, however, they may seem primitive compared to the stunning new windows into the brain that will have been developed.

Brain imaging was invented for medical diagnosis. But its far greater importance is that it may very well confirm, in ways too precise to be disputed, certain theories about "the mind," "the self," "the soul," and "free will" that are already devoutly believed in by scholars in what is now the hottest field in the academic world, neuroscience. Granted, all those skeptical quotation marks are enough to put anybody on the qui vive right away, but Ultimate Skepticism is part of the brilliance of the dawn I have promised.

Neuroscience, the science of the brain and the central nervous system, is on the threshold of a unified theory that will have an impact as powerful as that of Darwinism a hundred years ago. Already there is a new Darwin, or perhaps I should say an updated Darwin, since no one ever believed more religiously in Darwin I than he does. His name is Edward O. Wilson. He teaches zoology at Harvard, and he is the author of two books of extraordinary influence, The Insect Societies and Sociobiology: The New Synthesis. Not "A" new synthesis but "The" new synthesis; in terms of his stature in neuroscience, it is not a mere boast.

Wilson has created and named the new field of sociobiology, and he has compressed its underlying premise into a single sentence. Every human brain, he says, is born not as a blank tablet (a tabula rasa) waiting to be filled in by experience but as "an exposed negative waiting to be slipped into developer fluid." You can develop the negative well or you can develop it poorly, but either way you are going to get precious little that is not already imprinted on the film. The print is the individual's genetic history, over thousands of years of evolution, and there is not much anybody can do about it. Furthermore, says Wilson, genetics determine not only things such as temperament, role preferences, emotional responses, and levels of aggression, but also many of our most revered moral choices, which are not choices at all in any free-will sense but tendencies imprinted in the hypothalamus and limbic regions of the brain, a concept expanded upon in 1993 in a much-talked-about book, The Moral Sense , by James Q. Wilson (no kin to Edward O.).

The neuroscientific view of life

This, the neuroscientific view of life, has become the strategic high ground in the academic world, and the battle for it has already spread well beyond the scientific disciplines and, for that matter, out into the general public. Both liberals and conservatives without a scientific bone in their bodies are busy trying to seize the terrain. The gay rights movement, for example, has fastened onto a study published in July of 1993 by the highly respected Dean Hamer of the National Institutes of Health, announcing the discovery of "the gay gene." Obviously, if homosexuality is a genetically determined trait, like left-handedness or hazel eyes, then laws and sanctions against it are attempts to legislate against Nature. Conservatives, meantime, have fastened upon studies indicating that men's and women's brains are wired so differently, thanks to the long haul of evolution, that feminist attempts to open up traditionally male roles to women are the same thing: a doomed violation of Nature.

Wilson himself has wound up in deep water on this score; or cold water, if one need edit. In his personal life Wilson is a conventional liberal, PC, as the saying goes--he is , after all, a member of the Harvard faculty--concerned about environmental issues and all the usual things. But he has said that "forcing similar role identities" on both men and women "flies in the face of thousands of years in which mammals demonstrated a strong tendency for sexual division of labor. Since this division of labor is persistent from hunter-gatherer through agricultural and industrial societies, it suggests a genetic origin. We do not know when this trait evolved in human evolution or how resistant it is to the continuing and justified pressures for human rights."

"Resistant" was Darwin II, the neuroscientist, speaking. "Justified" was the PC Harvard liberal. He was not PC or liberal enough. Feminist protesters invaded a conference where Wilson was appearing, dumped a pitcher of ice water, cubes and all, over his head, and began chanting, "You're all wet! You're all wet!" The most prominent feminist in America, Gloria Steinem, went on television and, in an interview with John Stossel of ABC, insisted that studies of genetic differences between male and female nervous systems should cease forthwith.

But that turned out to be mild stuff in the current political panic over neuroscience. In February of 1992, Frederick K. Goodwin, a renowned psychiatrist, head of the federal Alcohol, Drug Abuse, and Mental Health Administration, and a certified yokel in the field of public relations, made the mistake of describing, at a public meeting in Washington, the National Institute of Mental Health's ten-year-old Violence Initiative. This was an experimental program whose hypothesis was that, as among monkeys in the jungle--Goodwin was noted for his monkey studies--much of the criminal mayhem in the United States was caused by a relatively few young males who were genetically predisposed to it; who were hardwired for violent crime, in short. Out in the jungle, among mankind's closest animal relatives, the chimpanzees, it seemed that a handful of genetically twisted young males were the ones who committed practically all of the wanton murders of other males and the physical abuse of females. What if the same were true among human beings? What if, in any given community, it turned out to be a handful of young males with toxic DNA who were pushing statistics for violent crime up to such high levels? The Violence Initiative envisioned identifying these individuals in childhood, somehow, some way, someday, and treating them therapeutically with drugs. The notion that crime-ridden urban America was a "jungle," said Goodwin, was perhaps more than just a tired old metaphor.

That did it. That may have been the stupidest single word uttered by an American public official in the year 1992. The outcry was immediate. Senator Edward Kennedy of Massachusetts and Representative John Dingell of Michigan (who, it became obvious later, suffered from hydrophobia when it came to science projects) not only condemned Goodwin's remarks as racist but also delivered their scientific verdict: Research among primates "is a preposterous basis" for analyzing anything as complex as "the crime and violence that plagues our country today." (This came as surprising news to NASA scientists who had first trained and sent a chimpanzee called Ham up on top of a Redstone rocket into suborbital space flight and then trained and sent another one, called Enos, which is Greek for "man," up on an Atlas rocket and around the earth in orbital space flight and had thereby accurately and completely predicted the physical, psychological, and task-motor responses of the human astronauts, Alan Shepard and John Glenn, who repeated the chimpanzees' flights and tasks months later.) The Violence Initiative was compared to Nazi eugenic proposals for the extermination of undesirables. Dingell's Michigan colleague, Representative John Conyers, then chairman of the Government Operations Committee and senior member of the Congressional Black Caucus, demanded Goodwin's resignation--and got it two days later, whereupon the government, with the Department of Health and Human Services now doing the talking, denied that the Violence Initiative had ever existed. It disappeared down the memory hole, to use Orwell's term.

A conference of criminologists and other academics interested in the neuroscientific studies done so far for the Violence Initiative--a conference underwritten in part by a grant from the National Institutes of Health--had been scheduled for May of 1993 at the University of Maryland. Down went the conference, too; the NIH drowned it like a kitten. Last year, a University of Maryland legal scholar named David Wasserman tried to reassemble the troops on the QT, as it were, in a hall all but hidden from human purview in a hamlet called Queenstown in the foggy, boggy boondocks of Queen Annes County on Maryland's Eastern Shore. The NIH, proving it was a hard learner, quietly provided $133,000 for the event but only after Wasserman promised to fireproof the proceedings by also inviting scholars who rejected the notion of a possible genetic genesis of crime and scheduling a cold-shower session dwelling on the evils of the eugenics movement of the early twentieth century. No use, boys! An army of protesters found the poor cringing devils anyway and stormed into the auditorium chanting, "Maryland conference, you can't hide--we know you're pushing genocide!" It took two hours for them to get bored enough to leave, and the conference ended in a complete muddle with the specially recruited fireproofing PC faction issuing a statement that said: "Scientists as well as historians and sociologists must not allow themselves to provide academic respectability for racist pseudoscience." Today, at the NIH, the term Violence Initiative is a synonym for taboo . The present moment resembles that moment in the Middle Ages when the Catholic Church forbade the dissection of human bodies, for fear that what was discovered inside might cast doubt on the Christian doctrine that God created man in his own image.

Even more radio-active is the matter of intelligence, as measured by IQ tests. Privately--not many care to speak out--the vast majority of neuroscientists believe the genetic component of an individual's intelligence is remarkably high. Your intelligence can be improved upon, by skilled and devoted mentors, or it can be held back by a poor upbringing--i.e., the negative can be well developed or poorly developed--but your genes are what really make the difference. The recent ruckus over Charles Murray and Richard Herrnstein's The Bell Curve is probably just the beginning of the bitterness the subject is going to create.

Not long ago, according to two neuroscientists I interviewed, a firm called Neurometrics sought out investors and tried to market an amazing but simple invention known as the IQ Cap. The idea was to provide a way of testing intelligence that would be free of "cultural bias," one that would not force anyone to deal with words or concepts that might be familiar to people from one culture but not to people from another. The IQ Cap recorded only brain waves; and a computer, not a potentially biased human test-giver, analyzed the results. It was based on the work of neuroscientists such as E. Roy John 1 , who is now one of the major pioneers of electroencephalographic brain imaging; Duilio Giannitrapani, author of The Electrophysiology of Intellectual Functions ; and David Robinson, author of The Wechsler Adult Intelligence Scale and Personality Assessment: Toward a Biologically Based Theory of Intelligence and Cognition and many other monographs famous among neuroscientists. I spoke to one researcher who had devised an IQ Cap himself by replicating an experiment described by Giannitrapani in The Electrophysiology of Intellectual Functions. It was not a complicated process. You attached sixteen electrodes to the scalp of the person you wanted to test. You had to muss up his hair a little, but you didn't have to cut it, much less shave it. Then you had him stare at a marker on a blank wall. This particular researcher used a raspberry- red thumbtack. Then you pushed a toggle switch. In sixteen seconds the Cap's computer box gave you an accurate prediction (within one-half of a standard deviation) of what the subject would score on all eleven subtests of the Wechsler Adult Intelligence Scale or, in the case of children, the Wechsler Intelligence Scale for Children--all from sixteen seconds' worth of brain waves. There was nothing culturally biased about the test whatsoever. What could be cultural about staring at a thumbtack on a wall? The savings in time and money were breathtaking. The conventional IQ test took two hours to complete; and the overhead, in terms of paying test-givers, test-scorers, test-preparers, and the rent, was $100 an hour at the very least. The IQ Cap required about fifteen minutes and sixteen seconds--it took about fifteen minutes to put the electrodes on the scalp--and about a tenth of a penny's worth of electricity. Neurometrics's investors were rubbing their hands and licking their chops. They were about to make a killing.

In fact-- nobody wanted their damnable IQ Cap!

It wasn't simply that no one believed you could derive IQ scores from brainwaves--it was that nobody wanted to believe it could be done. Nobody wanted to believe that human brainpower is... that hardwired . Nobody wanted to learn in a flash that... the genetic fix is in . Nobody wanted to learn that he was... a hardwired genetic mediocrity ...and that the best he could hope for in this Trough of Mortal Error was to live out his mediocre life as a stress-free dim bulb. Barry Sterman of UCLA, chief scientist for a firm called Cognitive Neurometrics, who has devised his own brain-wave technology for market research and focus groups, regards brain-wave IQ testing as possible--but in the current atmosphere you "wouldn't have a Chinaman's chance of getting a grant" to develop it.

Science is a court from which there is no appeal

Here we begin to sense the chill that emanates from the hottest field in the academic world. The unspoken and largely unconscious premise of the wrangling over neuroscience's strategic high ground is: We now live in an age in which science is a court from which there is no appeal. And the issue this time around, at the end of the twentieth century, is not the evolution of the species, which can seem a remote business, but the nature of our own precious inner selves.

The elders of the field, such as Wilson, are well aware of all this and are cautious, or cautious compared to the new generation. Wilson still holds out the possibility--I think he doubts it, but he still holds out the possibility--that at some point in evolutionary history, culture began to influence the development of the human brain in ways that cannot be explained by strict Darwinian theory. But the new generation of neuroscientists are not cautious for a second. In private conversations, the bull sessions, as it were, that create the mental atmosphere of any hot new science--and I love talking to these people--they express an uncompromising determinism.

They start with the most famous statement in all of modern philosophy, Descartes's "Cogito ergo sum," "I think, therefore I am," which they regard as the essence of "dualism," the old-fashioned notion that the mind is something distinct from its mechanism, the brain and the body. (I will get to the second most famous statement in a moment.) This is also known as the "ghost in the machine" fallacy, the quaint belief that there is a ghostly "self" somewhere inside the brain that interprets and directs its operations. Neuroscientists involved in three-dimensional electroencephalography will tell you that there is not even any one place in the brain where consciousness or self-consciousness ( Cogito ergo sum ) is located. This is merely an illusion created by a medley of neurological systems acting in concert. The young generation takes this yet one step further. Since consciousness and thought are entirely physical products of your brain and nervous system--and since your brain arrived fully imprinted at birth--what makes you think you have free will? Where is it going to come from? What "ghost," what "mind," what "self," what "soul," what anything that will not be immediately grabbed by those scornful quotation marks, is going to bubble up your brain stem to give it to you? I have heard neuroscientists theorize that, given computers of sufficient power and sophistication, it would be possible to predict the course of any human being's life moment by moment, including the fact that the poor devil was about to shake his head over the very idea. I doubt that any Calvinist of the sixteenth century ever believed so completely in predestination as these, the hottest and most intensely rational young scientists in the United States at the end of the twentieth.

Since the late 1970s, in the Age of Wilson, college students have been heading into neuroscience in job lots. The Society for Neuroscience was founded in 1970 with 1,100 members. Today, one generation later, its membership exceeds 26,000. The Society's latest convention, in San Diego, drew 23,052 souls, making it one of the biggest professional conventions in the country. In the venerable field of academic philosophy, young faculty members are jumping ship in embarrassing numbers and shifting into neuroscience. They are heading for the laboratories. Why wrestle with Kant's God, Freedom, and Immortality when it is only a matter of time before neuroscience, probably through brain imaging, reveals the actual physical mechanism that sends these mental constructs, these illusions, synapsing up into the Broca's and Wernicke's areas of the brain?

Which brings us to the second most famous statement in all of modern philosophy: Nietzsche's "God is dead." The year was 1882. (The book was Die Fröhliche Wissenschaft [ The Gay Science ].) Nietzsche said this was not a declaration of atheism, although he was in fact an atheist, but simply the news of an event. He called the death of God a "tremendous event," the greatest event of modern history. The news was that educated people no longer believed in God, as a result of the rise of rationalism and scientific thought, including Darwinism, over the preceding 250 years. But before you atheists run up your flags of triumph, he said, think of the implications. "The story I have to tell," wrote Nietzsche, "is the history of the next two centuries." He predicted (in Ecce Homo ) that the twentieth century would be a century of "wars such as have never happened on earth," wars catastrophic beyond all imagining. And why? Because human beings would no longer have a god to turn to, to absolve them of their guilt; but they would still be racked by guilt, since guilt is an impulse instilled in children when they are very young, before the age of reason. As a result, people would loathe not only one another but themselves. The blind and reassuring faith they formerly poured into their belief in God, said Nietzsche, they would now pour into a belief in barbaric nationalistic brotherhoods: "If the doctrines...of the lack of any cardinal distinction between man and animal, doctrines I consider true but deadly"--he says in an allusion to Darwinism in Untimely Meditations --"are hurled into the people for another generation...then nobody should be surprised when...brotherhoods with the aim of the robbery and exploitation of the non-brothers...will appear in the arena of the future."

Nietzsche's view of guilt, incidentally, is also that of neuro-scientists a century later. They regard guilt as one of those tendencies imprinted in the brain at birth. In some people the genetic work is not complete, and they engage in criminal behavior without a twinge of remorse--thereby intriguing criminologists, who then want to create Violence Initiatives and hold conferences on the subject.

Nietzsche said that mankind would limp on through the twentieth century "on the mere pittance" of the old decaying God-based moral codes. But then, in the twenty-first, would come a period more dreadful than the great wars, a time of "the total eclipse of all values" (in The Will to Power ). This would also be a frantic period of "revaluation," in which people would try to find new systems of values to replace the osteoporotic skeletons of the old. But you will fail, he warned, because you cannot believe in moral codes without simultaneously believing in a god who points at you with his fearsome forefinger and says "Thou shalt" or "Thou shalt not."

Why should we bother ourselves with a dire prediction that seems so far-fetched as "the total eclipse of all values"? Because of man's track record, I should think. After all, in Europe, in the peaceful decade of the 1880s, it must have seemed even more far-fetched to predict the world wars of the twentieth century and the barbaric brotherhoods of Nazism and Communism. Ecce vates! Ecce vates! Behold the prophet! How much more proof can one demand of a man's powers of prediction?

A hundred years ago those who worried about the death of God could console one another with the fact that they still had their own bright selves and their own inviolable souls for moral ballast and the marvels of modern science to chart the way. But what if, as seems likely, the greatest marvel of modern science turns out to be brain imaging? And what if, ten years from now, brain imaging has proved, beyond any doubt, that not only Edward O. Wilson but also the young generation are, in fact, correct?

The elders, such as Wilson himself and Daniel C. Dennett, the author of Darwin's Dangerous Idea: Evolution and the Meanings of Life , and Richard Dawkins, author of The Selfish Gene and The Blind Watchmaker , insist that there is nothing to fear from the truth, from the ultimate extension of Darwin's dangerous idea. They present elegant arguments as to why neuroscience should in no way diminish the richness of life, the magic of art, or the righteousness of political causes, including, if one need edit, political correctness at Harvard or Tufts, where Dennett is Director of the Center for Cognitive Studies, or Oxford, where Dawkins is something called Professor of Public Understanding of Science. (Dennett and Dawkins, every bit as much as Wilson, are earnestly, feverishly, politically correct.) Despite their best efforts, however, neuroscience is not rippling out into the public on waves of scholarly reassurance. But rippling out it is, rapidly. The conclusion people out beyond the laboratory walls are drawing is: The fix is in! We're all hardwired! That, and: Don't blame me! I'm wired wrong!

From nurture to nature

This sudden switch from a belief in Nurture, in the form of social conditioning, to Nature, in the form of genetics and brain physiology, is the great intellectual event, to borrow Nietzsche's term, of the late twentieth century. Up to now the two most influential ideas of the century have been Marxism and Freudianism. Both were founded upon the premise that human beings and their "ideals"--Marx and Freud knew about quotation marks, too--are completely molded by their environment. To Marx, the crucial environment was one's social class; "ideals" and "faiths" were notions foisted by the upper orders upon the lower as instruments of social control. To Freud, the crucial environment was the Oedipal drama, the unconscious sexual plot that was played out in the family early in a child's existence. The "ideals" and "faiths" you prize so much are merely the parlor furniture you feature for receiving your guests, said Freud; I will show you the cellar, the furnace, the pipes, the sexual steam that actually runs the house. By the mid-1950s even anti-Marxists and anti-Freudians had come to assume the centrality of class domination and Oedipally conditioned sexual drives. On top of this came Pavlov, with his "stimulus-response bonds," and B. F. Skinner, with his "operant conditioning," turning the supremacy of conditioning into something approaching a precise form of engineering.

So how did this brilliant intellectual fashion come to so screeching and ignominious an end?

The demise of Freudianism can be summed up in a single word: lithium. In 1949 an Australian psychiatrist, John Cade, gave five days of lithium therapy--for entirely the wrong reasons--to a fifty-one-year-old mental patient who was so manic-depressive, so hyperactive, unintelligible, and uncontrollable, he had been kept locked up in asylums for twenty years. By the sixth day, thanks to the lithium buildup in his blood, he was a normal human being. Three months later he was released and lived happily ever after in his own home. This was a man who had been locked up and subjected to two decades of Freudian logorrhea to no avail whatsoever. Over the next twenty years antidepressant and tranquilizing drugs completely replaced Freudian talk-talk as treatment for serious mental disturbances. By the mid-1980s, neuroscientists looked upon Freudian psychiatry as a quaint relic based largely upon superstition (such as dream analysis -- dream analysis!), like phrenology or mesmerism. In fact, among neuroscientists, phrenology now has a higher reputation than Freudian psychiatry, since phrenology was in a certain crude way a precursor of electroencephalography. Freudian psychiatrists are now regarded as old crocks with sham medical degrees, as ears with wire hairs sprouting out of them that people with more money than sense can hire to talk into.

Marxism was finished off even more suddenly--in a single year, 1973--with the smuggling out of the Soviet Union and the publication in France of the first of the three volumes of Aleksandr Solzhenitsyn's The Gulag Archipelago . Other writers, notably the British historian Robert Conquest, had already exposed the Soviet Union's vast network of concentration camps, but their work was based largely on the testimony of refugees, and refugees were routinely discounted as biased and bitter observers. Solzhenitsyn, on the other hand, was a Soviet citizen, still living on Soviet soil, a zek himself for eleven years, zek being Russian slang for concentration camp prisoner. His credibility had been vouched for by none other than Nikita Khrushchev, who in 1962 had permitted the publication of Solzhenitsyn's novella of the gulag, One Day in the Life of Ivan Denisovich , as a means of cutting down to size the daunting shadow of his predecessor Stalin. "Yes," Khrushchev had said in effect, "what this man Solzhenitsyn has to say is true. Such were Stalin's crimes." Solzhenitsyn's brief fictional description of the Soviet slave labor system was damaging enough. But The Gulag Archipelago , a two-thousand-page, densely detailed, nonfiction account of the Soviet Communist Party's systematic extermination of its enemies, real and imagined, of its own countrymen, by the tens of millions through an enormous, methodical, bureaucratically controlled "human sewage disposal system," as Solzhenitsyn called it-- The Gulag Archipelago was devastating. After all, this was a century in which there was no longer any possible ideological detour around the concentration camp. Among European intellectuals, even French intellectuals, Marxism collapsed as a spiritual force immediately. Ironically, it survived longer in the United States before suffering a final, merciful coup de grâce on November 9, 1989, with the breaching of the Berlin Wall, which signaled in an unmistakable fashion what a debacle the Soviets' seventy-two-year field experiment in socialism had been. (Marxism still hangs on, barely, acrobatically, in American universities in a Mannerist form known as Deconstruction, a literary doctrine that depicts language itself as an insidious tool used by The Powers That Be to deceive the proles and peasants.)

Freudianism and Marxism--and with them, the entire belief in social conditioning--were demolished so swiftly, so suddenly, that neuroscience has surged in, as if into an intellectual vacuum. Nor do you have to be a scientist to detect the rush.

Anyone with a child in school knows the signs all too well. I have children in school, and I am intrigued by the faith parents now invest--the craze began about 1990--in psychologists who diagnose their children as suffering from a defect known as attention deficit disorder, or ADD. Of course, I have no way of knowing whether this "disorder" is an actual, physical, neurological condition or not, but neither does anybody else in this early stage of neuroscience. The symptoms of this supposed malady are always the same. The child, or, rather, the boy--forty-nine out of fifty cases are boys--fidgets around in school, slides off his chair, doesn't pay attention, distracts his classmates during class, and performs poorly. In an earlier era he would have been pressured to pay attention, work harder, show some self-discipline. To parents caught up in the new intellectual climate of the 1990s, that approach seems cruel, because my little boy's problem is... he's wired wrong! The poor little tyke --the fix has been in since birth! Invariably the parents complain, "All he wants to do is sit in front of the television set and watch cartoons and play Sega Genesis." For how long? "How long? For hours at a time." Hours at a time; as even any young neuroscientist will tell you, that boy may have a problem, but it is not an attention deficit.

Nevertheless, all across America we have the spectacle of an entire generation of little boys, by the tens of thousands, being dosed up on ADD's magic bullet of choice, Ritalin, the CIBA-Geneva Corporation's brand name for the stimulant methylphenidate. I first encountered Ritalin in 1966 when I was in San Francisco doing research for a book on the psychedelic or hippie movement. A certain species of the genus hippie was known as the Speed Freak, and a certain strain of Speed Freak was known as the Ritalin Head. The Ritalin Heads loved Ritalin. You'd see them in the throes of absolute Ritalin raptures...Not a wiggle, not a peep...They would sit engrossed in anything at all...a manhole cover, their own palm wrinkles...indefinitely...through shoulda-been mealtime after mealtime...through raging insomnias...Pure methyl-phenidate nirvana...From 1990 to 1995, CIBA-Geneva's sales of Ritalin rose 600 percent; and not because of the appetites of subsets of the species Speed Freak in San Francisco, either. It was because an entire generation of American boys, from the best private schools of the Northeast to the worst sludge-trap public schools of Los Angeles and San Diego, was now strung out on methylphenidate, diligently doled out to them every day by their connection, the school nurse. America is a wonderful country! I mean it! No honest writer would challenge that statement! The human comedy never runs out of material! It never lets you down!

Meantime, the notion of a self--a self who exercises self-discipline, postpones gratification, curbs the sexual appetite, stops short of aggression and criminal behavior--a self who can become more intelligent and lift itself to the very peaks of life by its own bootstraps through study, practice, perseverance, and refusal to give up in the face of great odds--this old-fashioned notion (what's a boot strap, for God's sake?) of success through enterprise and true grit is already slipping away, slipping away...slipping away...The peculiarly American faith in the power of the individual to transform himself from a helpless cypher into a giant among men, a faith that ran from Emerson ("Self-Reliance") to Horatio Alger's Luck and Pluck stories to Dale Carnegie's How to Win Friends and Influence People to Norman Vincent Peale's The Power of Positive Thinking to Og Mandino's The Greatest Salesman in the World --that faith is now as moribund as the god for whom Nietzsche wrote an obituary in 1882. It lives on today only in the decrepit form of the "motivational talk," as lecture agents refer to it, given by retired football stars such as Fran Tarkenton to audiences of businessmen, most of them woulda-been athletes (like the author of this article), about how life is like a football game. "It's late in the fourth period and you're down by thirteen points and the Cowboys got you hemmed in on your own one-yard line and it's third and twenty-three. Whaddaya do?..."

Sorry, Fran, but it's third and twenty-three and the genetic fix is in, and the new message is now being pumped out into the popular press and onto television at a stupefying rate. Who are the pumps? They are a new breed who call themselves "evolutionary psychologists." You can be sure that twenty years ago the same people would have been calling themselves Freudian; but today they are genetic determinists, and the press has a voracious appetite for whatever they come up with.

The most popular study currently--it is still being featured on television news shows, months later--is David Lykken and Auke Tellegen's study at the University of Minnesota of two thousand twins that shows, according to these two evolutionary psychologists, that an individual's happiness is largely genetic. Some people are hardwired to be happy and some are not. Success (or failure) in matters of love, money, reputation, or power is transient stuff; you soon settle back down (or up) to the level of happiness you were born with genetically. Three months ago Fortune devoted a long takeout, elaborately illustrated, to a study by evolutionary psychologists at Britain's University of Saint Andrews showing that you judge the facial beauty or handsomeness of people you meet not by any social standards of the age you live in but by criteria hardwired in your brain from the moment you were born. Or, to put it another way, beauty is not in the eye of the beholder but embedded in his genes. In fact, today, in the year 1996, barely three years before the end of the millennium, if your appetite for newspapers, magazines, and television is big enough, you will quickly get the impression that there is nothing in your life, including the fat content of your body, that is not genetically predetermined. If I may mention just a few things the evolutionary psychologists have illuminated for me over the past two months:

The male of the human species is genetically hardwired to be polygamous, i.e., unfaithful to his legal mate. Any magazine-reading male gets the picture soon enough. (Three million years of evolution made me do it!) Women lust after male celebrities, because they are genetically hardwired to sense that alpha males will take better care of their offspring. (I'm just a lifeguard in the gene pool, honey.) Teenage girls are genetically hardwired to be promiscuous and are as helpless to stop themselves as dogs in the park. (The school provides the condoms.) Most murders are the result of genetically hardwired compulsions. (Convicts can read, too, and they report to the prison psychiatrist: "Something came over me...and then the knife went in." 2 )

Where does that leave self-control? Where, indeed, if people believe this ghostly self does not even exist, and brain imaging proves it, once and for all?

So far, neuroscientific theory is based largely on indirect evidence, from studies of animals or of how a normal brain changes when it is invaded (by accidents, disease, radical surgery, or experimental needles). Darwin II himself, Edward O. Wilson, has only limited direct knowledge of the human brain. He is a zoologist, not a neurologist, and his theories are extrapolations from the exhaustive work he has done in his specialty, the study of insects. The French surgeon Paul Broca discovered Broca's area, one of the two speech centers of the left hemisphere of the brain, only after one of his patients suffered a stroke. Even the PET scan and the PET reporter gene/PET reporter probe are technically medical invasions, since they require the injection of chemicals or viruses into the body. But they offer glimpses of what the noninvasive imaging of the future will probably look like. A neuroradiologist can read a list of topics out loud to a person being given a PET scan, topics pertaining to sports, music, business, history, whatever, and when he finally hits one the person is interested in, a particular area of the cerebral cortex actually lights up on the screen. Eventually, as brain imaging is refined, the picture may become as clear and complete as those see-through exhibitions, at auto shows, of the inner workings of the internal combustion engine. At that point it may become obvious to everyone that all we are looking at is a piece of machinery, an analog chemical computer, that processes information from the environment. "All," since you can look and look and you will not find any ghostly self inside, or any mind, or any soul.

Thereupon, in the year 2006 or 2026, some new Nietzsche will step forward to announce: "The self is dead"--except that being prone to the poetic, like Nietzsche I, he will probably say: "The soul is dead." He will say that he is merely bringing the news, the news of the greatest event of the millennium: "The soul, that last refuge of values, is dead, because educated people no longer believe it exists." Unless the assurances of the Wilsons and the Dennetts and the Dawkinses also start rippling out, the lurid carnival that will ensue may make the phrase "the total eclipse of all values" seem tame.

The two most fascinating riddles of the 21st century

If I were a college student today, I don't think I could resist going into neuroscience. Here we have the two most fascinating riddles of the twenty-first century: the riddle of the human mind and the riddle of what happens to the human mind when it comes to know itself absolutely. In any case, we live in an age in which it is impossible and pointless to avert your eyes from the truth.

Ironically, said Nietzsche, this unflinching eye for truth, this zest for skepticism, is the legacy of Christianity (for complicated reasons that needn't detain us here). Then he added one final and perhaps ultimate piece of irony in a fragmentary passage in a notebook shortly before he lost his mind (to the late nineteenth century's great venereal scourge, syphilis). He predicted that eventually modern science would turn its juggernaut of skepticism upon itself, question the validity of its own foundations, tear them apart, and self-destruct. I thought about that in the summer of 1994 when a group of mathematicians and computer scientists held a conference at the Santa Fe Institute on "Limits to Scientific Knowledge." The consensus was that since the human mind is, after all, an entirely physical apparatus, a form of computer, the product of a particular genetic history, it is finite in its capabilities. Being finite, hardwired, it will probably never have the power to comprehend human existence in any complete way. It would be as if a group of dogs were to call a conference to try to understand The Dog. They could try as hard as they wanted, but they wouldn't get very far. Dogs can communicate only about forty notions, all of them primitive, and they can't record anything. The project would be doomed from the start. The human brain is far superior to the dog's, but it is limited nonetheless. So any hope of human beings arriving at some final, complete, self-enclosed theory of human existence is doomed, too.

This, science's Ultimate Skepticism, has been spreading ever since then. Over the past two years even Darwinism, a sacred tenet among American scientists for the past seventy years, has been beset by...doubts. Scientists--not religiosi--notably the mathematician David Berlinski ("The Deniable Darwin," Commentary, June 1996) and the biochemist Michael Behe (Darwin's Black Box, 1996), have begun attacking Darwinism as a mere theory, not a scientific discovery, a theory woefully unsupported by fossil evidence and featuring, at the core of its logic, sheer mush. (Dennett and Dawkins, for whom Darwin is the Only Begotten, the Messiah, are already screaming. They're beside themselves, utterly apoplectic. Wilson, the giant, keeping his cool, has remained above the battle.) By 1990 the physicist Petr Beckmann of the University of Colorado had already begun going after Einstein. He greatly admired Einstein for his famous equation of matter and energy, E=mc², but called his theory of relativity mostly absurd and grotesquely untestable. Beckmann died in 1993. His Fool Killer's cudgel has been taken up by Howard Hayden of the University of Connecticut, who has many admirers among the upcoming generation of Ultimately Skeptical young physicists. The scorn the new breed heaps upon quantum mechanics ("has no real-world applications"..."depends entirely on fairies sprinkling goofball equations in your eyes"), Unified Field Theory ("Nobel worm bait"), and the Big Bang Theory ("creationism for nerds") has become withering. If only Nietzsche were alive! He would have relished every minute of it!

Recently I happened to be talking to a prominent California geologist, and she told me: "When I first went into geology, we all thought that in science you create a solid layer of findings, through experiment and careful investigation, and then you add a second layer, like a second layer of bricks, all very carefully, and so on. Occasionally some adventurous scientist stacks the bricks up in towers, and these towers turn out to be insubstantial and they get torn down, and you proceed again with the careful layers. But we now realize that the very first layers aren't even resting on solid ground. They are balanced on bubbles, on concepts that are full of air, and those bubbles are being burst today, one after the other."

I suddenly had a picture of the entire astonishing edifice collapsing and modern man plunging headlong back into the primordial ooze. He's floundering, sloshing about, gulping for air, frantically treading ooze, when he feels something huge and smooth swim beneath him and boost him up, like some almighty dolphin. He can't see it, but he's much impressed. He names it God.