Saturday, April 16, 2005
This Day:

From time to time, the planet Earth has suffered mass extinctions. The last big one happened about 65 million years ago (as the Cretaceous ended and the Paleogene began) and wiped out the dinosaurs along with many other species. But the biggest one happened about 250 million years ago, and that event marks the transition from the Permian geological period to the Triassic. Roughly 90 percent of all marine life died, as well as nearly three-quarters of all land plants and animals! Now a new study suggests that a lack of oxygen may have left animals gasping for air and helped drive this great dying.

Pangea Break-up (Courtesy: USGS)
Back then, the Earth's continents were glued together into a single large supercontinent called Pangea. The reason behind the Great Dying is not known, and different hypotheses have been put forward (asteroid impact, global warming, supervolcanoes). The lack-of-oxygen scenario (put forward by Raymond Huey and Peter Ward of the University of Washington) might just be what really happened.
Oxygen currently makes up about 21% of our atmosphere. In the early Permian period it was 30%, from which it fell to 16% during the Great Dying, and to 12% after that. Such low levels of oxygen meant that animals at sea level breathed air similar to that at the top of a 17,400-foot mountain today! At higher elevations, the oxygen content would have been lower still. This would effectively have restricted the movements of the animals living at the time: populations might have become fragmented, hastening their demise.
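The altitude comparison can be sanity-checked with a back-of-the-envelope calculation: find the present-day altitude at which the oxygen partial pressure matches sea-level air with a reduced O2 fraction. This is only a sketch using a simple isothermal barometric model with an assumed 8,000 m scale height (the real atmosphere deviates from this, which is one reason the study's 17,400-foot figure comes out higher):

```python
import math

def equivalent_altitude_m(o2_fraction, scale_height_m=8000.0, present_fraction=0.21):
    """Altitude (today) where the O2 partial pressure matches sea-level air
    with the given O2 fraction, using an isothermal barometric model."""
    return -scale_height_m * math.log(o2_fraction / present_fraction)

for f in (0.16, 0.12):
    h = equivalent_altitude_m(f)
    print(f"{f:.0%} O2 at sea level ~ {h:,.0f} m ({h * 3.281:,.0f} ft) today")
```

Even this crude model shows the post-extinction 12% level pushing sea-level animals to the equivalent of several kilometers of altitude.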
The lack of oxygen by itself might not have been the sole reason, but it could be a large factor in what happened. Perhaps the living kingdom suffered a double whammy: a lack of oxygen, coupled with some other major event that precipitated the mass extinction.


11 Comments:

At April 18, 2005 5:04 AM, Blogger Onkroes said...
Could such a small reduction in oxygen levels cause such a massive impact? It seems like the oxygen reduction is a symptom rather than a cause in itself.

Also, are there other potential extinction causes that could have caused the lowered oxygen levels? I haven't read through the original source, but do they hypothesize why the oxygen levels lowered?
 
At April 18, 2005 5:14 AM, Blogger Sray said...
"Also, are there other potential extinction causes that could have caused the lowered oxygen levels."

That is true. What this study suggests is the mechanism by which reduced oxygen (which did happen) can affect evolution and extinction.

Oxygen percentage went down from 20% to 12% in only 10 million years, and this is quite rapid in geological terms.

About why oxygen went down, there are competing hypotheses. For example, one computer model suggests that as the poles warmed, the circulation of ocean waters slowed down and stopped. The period of stagnation was brief, only a few hundred years, but the reduced overturning of waters was enough to trigger a greater zone of low oxygen levels both in the intermediate waters (beginning at about 100 meters from the surface) and on the sea floor.
 
At April 18, 2005 9:17 AM, Blogger wise donkey said...
this is interesting:)
but i didnt understand why circulation of waters wil reduce oxygen level.
and what are the chances of it happening again?
 
At April 18, 2005 11:41 PM, Blogger Sray said...
WD: if the circulation of the waters (see my post on North-South climates linked) stops, the oxygen level in the sea water goes down, as less oxygen is absorbed from the atmosphere (stagnant pools have less oxygen content than flowing water). Consequently, sea life suffers, and slowly, the whole ecosystem (both marine and land) might collapse.
 
At April 18, 2005 11:42 PM, Blogger Sray said...
Gindy: dont worry, we wont live long enough to go through another of such declines again :-D.. unless there is a wide-scale pollution, or another supervolcano!
 
At April 19, 2005 2:07 AM, Blogger wise donkey said...
i read the post. but guess what i cant understand is why stagnant pools have less oxygen content than flowing water.
does this oxygen level also have anythin to do with the fresh water aspect?
 
At April 19, 2005 2:21 AM, Blogger Sray said...
WD: Flowing water has more oxygen than stagnant water, for mainly two reasons.

1) Flowing water has turbulence at the surface, which allows for greater air-water mixing. Stagnant water has a still, unbroken surface that cannot absorb oxygen at the rate flowing water can.
2) Water is constantly cycled in flowing water. In stagnant water, the upper layers get saturated with oxygen, while the lower layers are deprived of it.

Oxygen is lost from the water when temperature rises (due to more evaporation, and higher kinetic energies). That can cause mass dying in the marine world.
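The temperature effect on dissolved oxygen can be sketched with Henry's law. This is a rough model, not from the post: the Henry constant of 1.3e-3 mol/(L·atm) at 298 K and the 1700 K van 't Hoff coefficient are textbook approximations for O2.

```python
import math

def dissolved_o2_mg_per_l(temp_c, o2_partial_atm=0.21):
    """Approximate dissolved O2 at saturation via Henry's law with a
    van 't Hoff temperature correction (rough textbook constants)."""
    k_298 = 1.3e-3      # Henry constant for O2, mol/(L*atm), at 298.15 K
    vant_hoff = 1700.0  # temperature-dependence coefficient, K
    t_k = temp_c + 273.15
    k = k_298 * math.exp(vant_hoff * (1.0 / t_k - 1.0 / 298.15))
    return k * o2_partial_atm * 32000.0  # O2 is 32 g/mol -> mg/L

for t in (5, 15, 25):
    print(f"{t:2d} C: ~{dissolved_o2_mg_per_l(t):.1f} mg/L")
```

The cold-water value comes out well above the warm-water one, matching the point above that warmer water holds less oxygen.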
 
At April 19, 2005 2:59 AM, Blogger wise donkey said...
Wow you do have a knack of explainin in simple terms.
(and i guess its tough explainin to a person who doesnt know what kinetic energy is:D)

now i understood the post. but if u dont mind, another question.

i thought water was just h20 and that the hydrogen-oxygen content would be fixed.

what i want to know if there are oxygen levels for water, would it make any difference for human beings while drinking water with more oxygen etc?
 
At April 19, 2005 3:27 AM, Blogger Sray said...
Well, water is H2O, but atmospheric oxygen dissolves in water (the actual concentration depends upon a lot of factors, like salinity, temperature, and pressure). As a matter of fact, the fish use this dissolved oxygen to breathe! They use their gills to filter out oxygen from the water. Remove the dissolved oxygen, and the fish will die.

The oxygen amount is minute, so it really doesnt make a difference while drinking such water :-).
 
At April 19, 2005 6:47 AM, Blogger wise donkey said...
ok:)
 
At April 19, 2005 3:38 PM, Blogger Sray said...
I agree with him too at some level... but the expletives were totally uncalled for.
 


Friday, April 15, 2005

In the futuristic movie Minority Report, the lead character (portrayed by Tom Cruise) manipulates on-screen images using a gloved hand. A video camera picks up the 3D hand motions, and the computer translates the motions into actions on the display. Now researchers from the defense company Raytheon have come up with a prototype that does exactly that. They have even employed John Underkoffler, the researcher who proposed the interface to the film's makers!

Talk to the hand! (Courtesy: Raytheon)
Current user interfaces (keyboard, mouse) work in a 2D world, which limits their applications. Even supposedly 3D interfaces (trackballs, game controllers) are not truly 3D, as you need more than one control for accurate 3D navigation. Raytheon's system lets the user wear a pair of reflective gloves and manipulate images projected on a panoramic screen. A camera takes live pictures, and the computer processes the images to translate them into motion on the screen.
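The image-processing step can be illustrated with a toy sketch (this is not Raytheon's actual pipeline, just the basic idea of tracking bright reflective markers): threshold a grayscale frame and take the centroid of the bright pixels.

```python
import numpy as np

def track_markers(frame, threshold=0.9):
    """Find bright reflective-glove pixels in a grayscale frame
    (values in [0, 1]) and return their centroid as (row, col)."""
    rows, cols = np.nonzero(frame > threshold)
    if rows.size == 0:
        return None  # no marker visible in this frame
    return float(rows.mean()), float(cols.mean())

# Toy frame: a dark 8x8 image with one bright 2x2 "marker".
frame = np.zeros((8, 8))
frame[2:4, 5:7] = 1.0
print(track_markers(frame))
```

Tracking the centroid from frame to frame gives a 2D trajectory; a real system would combine two or more cameras to recover the third dimension.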
Such technology can be used to sort through large volumes of data (for example, satellite imagery, intelligence data), and also might help doctors to perform virtual surgery. It also has huge potential in gaming and virtual reality-based applications.


7 Comments:

At April 17, 2005 1:53 PM, Blogger Sray said...
Soon we will be wirelessly hooked up with everything, and will just have to 'think', and the work is done :-). Will that be the pinnacle of evolution? I dont know... Asimov's "Foundation Series" comes to mind....
 
At April 17, 2005 3:06 PM, Blogger wise donkey said...
interesting sray..

but just "think" pinnacle of evolution , no i dont think..
(no idea on Asimov)
 
At April 17, 2005 3:30 PM, Blogger LEMNA said...
Emm, I heard about something like this before, and someday I knew somebody who was working in these fields... It is really nice...
 
At April 17, 2005 4:49 PM, Blogger Sray said...
WD: just imagine that your brain is fitted with a device that communicates with the rest of the world. It is like a wireless computer, directly interfaced with your brain. Such things will happen in the future... but hacking will take on a new meaning then!
 
At April 17, 2005 4:50 PM, Blogger Sray said...
Dear Dear Lemna, people all over the world are working on such things. It is the hottest thing right now! And there are so many applications, the only limit is your imagination :-).
 
At April 22, 2005 6:57 AM, Blogger Wayne Smallman said...
There's one obvious problem with the device used in the Minority Report, and it became obvious to me the first moment I saw the trailer for the film: after about five minutes of sustained use, you'd end up with horrendous back, arm and neck ache.

Looks good, but such a device clearly never passed through the qualified hands of an ergonomist.

Use such a thing for too long, and you're sure to pass through the qualified hands of a physiotherapist...
 
At April 23, 2005 11:49 AM, Blogger Sray said...
I think the next big thing is already on the horizon.. that is, using the brain directly to move objects or do things. One would have an interface with the brain that will send signals to the appliances, and things will get done :-D. No need of a physiotherapist there! Psychiatrists, perhaps :-).
 


Thursday, April 14, 2005

Scientists at the Australian National University might have found one of the earliest stars to have formed in the Universe. The star, called HE 1327-2326, has the lowest level of Iron found in any star, and was discovered by Anna Frebel and her team (findings published in the journal Nature). Since heavier elements such as Iron are supposed to have (mostly) formed through supernova explosions, higher concentrations of Iron are found in successive generations of stars. This star was observed using the Japanese Subaru telescope (8 meters), and found to be twice as iron-poor as the previous record holder.

The first star (Courtesy: STSCI)
The Japanese telescope has other impressive observations under its belt; it recently found one of the most distant galaxies known. This new star-find is crucially important, as it provides evidence of the time when the very first stars formed after the Big Bang. The star will be used to trace the development of elements in the early Universe, to see if the current theoretical models regarding the creation of elements are accurate. Such models are often derived using particle accelerator experiments, and it is awe-inspiring to see the domain of the large (stars, galaxies) and of the small (fundamental particles) come together in this way :).
The star also has abnormally high levels of Carbon, Strontium, and Nitrogen. Another interesting feature is that little (if any) Lithium is found in the star. This is highly unusual, as Lithium is one of the lightest elements (Atomic Number = 3), and relatively newer stars have more Lithium content, and follow theoretical models closely.
Explanations? It could very well be that the ancient cloud of gas that formed the star was rich in elements heavier than Hydrogen, Helium, and Lithium. But that is very unlikely; the more likely scenario is that the star formed relatively early, and is one of the true first-generation stars.


7 Comments:

At April 16, 2005 9:36 AM, Blogger broomhilda said...
That's pretty cool!
 
At April 16, 2005 11:55 AM, Blogger Sray said...
Yup, it is. Now you can launch your solar sail from the lunar north pole :-).
 
At April 16, 2005 8:08 PM, Blogger broomhilda said...
Sailin' back in time! :-D
 
At April 17, 2005 1:54 PM, Blogger Sray said...
Well, there is a nice train of reasoning behind it... and observations in other stars that do have iron...
 
At April 17, 2005 3:09 PM, Blogger wise donkey said...
Wow :)and any idea how old it is?
 
At April 17, 2005 4:52 PM, Blogger Sray said...
Oops... forgot to add that info :-(. The star is about 13-14 billion years old!
 
At April 18, 2005 8:57 AM, Blogger wise donkey said...
phew !
 


Wednesday, April 13, 2005

Due to its lack of an atmosphere, the temperature at the surface of the Moon varies wildly, depending mostly on the day-night cycle. In the day, the temperature of the Moon averages 107°C, although it rises as high as 123°C. The night cools the surface to an average of -153°C, or -233°C in the permanently shaded South Polar Basin. A typical non-polar minimum temperature is -181°C (at the Apollo 15 site). Regions with wild temperature swings are not preferable for siting a lunar base, where humans have to be able to live round-the-clock, so spots with more habitable climates have to be located. Now, scientists have perhaps found such a perfect spot at the lunar North Pole.

Lunar Illumination Map (Courtesy: MSNBC)
Any such region should have permanent sunlight, so as to allow a) round-the-clock solar energy, and b) a near-constant temperature. Such spots can only be found near the lunar poles; in particular, scientists have identified that the best spot to settle on the moon may be on the northern rim of Peary crater, close to the lunar North Pole. The analysis, to be published in the April 14 issue of the journal Nature, is based on 53 images from the spacecraft Clementine, which orbited the moon for 71 days in 1994.
Unlike Earth, whose pronounced axial tilt causes seasons, the Moon's rotational axis is almost perfectly upright, deviating just 1.5 degrees from the main plane of the solar system that extends outward from the Sun. On Earth, summer means constant sunlight at the North Pole, and winter plunges the Arctic into permanent darkness. But on the Moon, theorists have long suspected there might be high points from which the Sun is always visible.
The southern pole also has such sweet spots. However, the northern pole has more water, and that is a critical point that tilted the decision in favor of the northern spot.


4 Comments:

At April 15, 2005 12:23 PM, Blogger Sray said...
Firstly, "Ever" is a long time :-).

Certain polar ridges on the moon get near constant sunlight. It is these regions that we are talking about. They should have near constant temperature.
 
At April 15, 2005 10:52 PM, Blogger Sray said...
One of my early moon sci-fi favorites was "Earthlight" by A.C. Clarke. Wish such things would happen in our lifetime!
 
At April 17, 2005 3:18 PM, Blogger wise donkey said...
pretty interesting:)
 


Tuesday, April 12, 2005

Bose-Einstein Condensate (BEC) is a super-fluid phase formed by atoms cooled to very close to Absolute Zero (about -273.15°C). At this temperature, the atoms get into a single quantum state (essentially behaving as one super-atom), and exhibit some really interesting phenomena. Now, Harvard University scientists (Lene Hau and group) have shown that these ultra-cold atoms can essentially 'freeze' and 'control' light, and form a processing unit of an optical computer :):). Optical computers would transport information ten times faster than traditional electronic devices (essentially at the speed of light), thereby smashing the intrinsic speed limit of silicon technology.

Stopped Light! (Courtesy: Hau's Lab)
Professor Hau's group was previously able to slow down the speed of light (299,792,458 meters/second, or 186,000 miles/second, in vacuum) to about the speed of a bicycle, by using a cloud of BEC composed of sodium atoms. The same apparatus is now able to stop the motion of light altogether! One application of this could be in memory storage for a future generation of optical computers.
But the really striking discovery is that such frozen light can be used to do 'computations'!! The amplitude and phase of 'moving' light are smeared out over a distance (imagine a ripple on a lake). However, these characteristics are essentially frozen in stationary light, and thus can be used to a) store information (act as memory), and b) be combined with the amplitude/phase of other light particles (photons) to form rudimentary computational units (e.g. for addition and subtraction: just as two ripples sometimes form a bigger ripple when they cross each other). Combining many such units could one day lead to a true optical computer.
In addition to the beauty of such a structure, such optical processors will far exceed the capabilities of the electronic computers of today. Photons are much smaller than electrons, and carry no charge. Hence they can be packed in a much smaller space, thus perhaps leading to a higher density of such computational units than can be achieved in modern computers.
Hopefully, this century (and the next) will belong to the quantum and the optical computers, just as the last one saw the rise of their electronic brethren :):).


8 Comments:

At April 13, 2005 1:14 AM, Blogger LEMNA said...
Hey, nice news, you know that I got happy hearing that. But how can they control the interference of the lights? Will they go in a special direction? The fact that photons carry no charge but electrons do may change the basis of the information transport. Will the heat produced decrease too?
 
At April 13, 2005 5:56 AM, Blogger Sray said...
Imagine a bunch of frozen photons. Each photon can have a different amplitude and phase, and they wont change with time! So, when two such photons interfere, they would generate new photon which is a superposition of the two.

Theoretically, say the first photon has amplitude A1 and phase P1. The second has amplitude A2 and phase P2. When these two interfere, you get a new photon with amplitude A1*cos(P1)+A2*cos(P2)! If P1 and P2 have suitable values, you can simulate addition (A1+A2), or subtraction (A1-A2).

Once addition and subtraction can be done, you can do every other operation.
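The arithmetic above can be played with directly. This toy sketch just implements the stated formula; it is a numerical illustration, not a simulation of the actual physics:

```python
import math

def combine(a1, p1, a2, p2):
    """Superpose two 'frozen photons' given as (amplitude, phase) pairs,
    using the toy rule: result = A1*cos(P1) + A2*cos(P2)."""
    return a1 * math.cos(p1) + a2 * math.cos(p2)

# Phases of 0 give addition; a phase of pi flips the sign, giving subtraction.
print(combine(3, 0, 2, 0))        # 3 + 2
print(combine(3, 0, 2, math.pi))  # 3 - 2
```

With just these two operations as building blocks, more complex arithmetic can in principle be composed, which is the point made above.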

The heat produced should be less, as a lot of heat in current processors is due to electron scattering, which is partly because electrons carry a charge. Photons are chargeless, so they should create less heat. Also, since the setup is at near absolute zero and nothing much is moving, heat generation is pretty low!

:-D:-D.
 
At April 13, 2005 4:53 PM, Blogger Sray said...
It is not absorbed, it just hangs there. It is hard to visualize, I know :-)... but you have to accept some things in quantum mechanics.

Perhaps it is helpful to imagine a point of light, hanging in the air?
 
At April 13, 2005 5:02 PM, Blogger Wayne Smallman said...
If I've read this correctly, there's an obvious downside to all of this.

This memory cannot work persistently.

To put it another way, once the plug is pulled, the state of the memory is lost.

So this kind of thing couldn't function like some of the other memory types you use in common electronic devices like mobile / cell phones and the like...
 
At April 13, 2005 5:43 PM, Blogger Sray said...
That is true. But perhaps it can replace the RAM and the processor of today, and the hard-drives can move on to holographic storage (see one my earlier posts here). That will be quite something!!
 
At April 14, 2005 10:49 PM, Blogger Sray said...
Yeah, that is a case of radiative cooling. The temperature of a gas is proportional to its energy, which is a sum of the kinetic, rotational, vibrational, and electronic energies. In such cooling, there are a few steps.

a) First remove all the faster atoms in the gas. That reduces the temperature quite a bit.
b) Slowly confine the remaining atoms in a smaller volume (by magnetic traps)
c) Hit them with lasers, which carry away energy on reflection.
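As a rough illustration of how cold the laser-cooling step can get, the Doppler limit for sodium (the atom used in Hau's BEC) can be computed from the linewidth of its cooling transition. The 9.79 MHz linewidth is a standard literature value for the sodium D2 line, not a figure from this post:

```python
import math

HBAR = 1.0545718e-34  # reduced Planck constant, J*s
KB = 1.380649e-23     # Boltzmann constant, J/K

def doppler_limit_k(linewidth_hz):
    """Doppler cooling limit: T = hbar * Gamma / (2 * kB),
    where Gamma = 2*pi * (natural linewidth in Hz)."""
    gamma = 2 * math.pi * linewidth_hz
    return HBAR * gamma / (2 * KB)

t = doppler_limit_k(9.79e6)  # sodium D2 line
print(f"Sodium Doppler limit: ~{t * 1e6:.0f} microkelvin")
```

This comes out in the hundreds of microkelvin; reaching the nanokelvin regime where a BEC actually forms requires the further evaporative-cooling steps listed above.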

"If these two technologies are combined, we'd soon see the emergence of mainframe-sized computers able to do more than all the computers out there today."

Yup! Then there are spintronic devices that will soon be on the market... so this century will belong to quantum computers, holographic storage, and brand new fields such as spintronics, superconductive electronics, and so on :-D.
 
At April 16, 2005 10:42 AM, Blogger wise donkey said...
amazing !!
 
At April 16, 2005 7:14 PM, Blogger Sray said...
Cute, isnt it?
 


Monday, April 11, 2005

Scientists at the University of Illinois at Urbana-Champaign have developed the fastest transistor (a typical computer processor contains millions of these). Working at the blinding speed of 604GHz, the new device was built from compounds called Indium Phosphide and Indium Gallium Arsenide that were designed to reduce data-transit time and improve density. The same group established an earlier record in 2003, when they made a transistor work at 509GHz (by breaking their own records of 452GHz and 382GHz).
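For a sense of scale, the switching period at these frequencies is a one-line unit conversion (a sketch, nothing more):

```python
def period_ps(freq_ghz):
    """Period of one cycle, in picoseconds, for a frequency given in GHz."""
    return 1e12 / (freq_ghz * 1e9)

# The group's successive records, oldest to newest.
for f in (382, 452, 509, 604):
    print(f"{f} GHz -> {period_ps(f):.2f} ps per cycle")
```

At 604 GHz the transistor completes a cycle in well under two picoseconds, which is why tera-hertz operation no longer looks far off.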

Light emitting transistor (Courtesy: UIUC)
The research was funded by the U.S. military through a $5.9 million DARPA grant. The results were published by Milton Feng and his student Walid Hafez in the journal Applied Physics Letters.
This new transistor paves the way to the creation of tera-hertz (1000+ GHz) devices. The researchers employed a technique known as Pseudomorphic grading, in which selective doping of the base, collector and emitter regions of the transistor results in a lower band-gap, and therefore, higher speed.
Once wired by the thousands into circuits, this faster transistor could improve the quality and battery life of high-frequency electronics like cell phones. This work can also lead to faster and more energy efficient computers and communication networks: the same group created the world's first light emitting transistor in 2003, which when combined with this transistor, can make high-speed (Tera-Hertz) fiber-optic communications possible.


5 Comments:

At April 12, 2005 4:38 AM, Blogger Onkroes said...
The one thing that ran through my mind reading this article was Redundancy.

I wonder what the refresh rate is of technology on the ground. In other words, there are millions of mobile/cell phones, PDAs, PCs, and other micro-electronic devices out there using 'old' chips. How long will it take this 'new' technology to work its way through to the lowest rung on the technology chain?

And where will the cutting edge be when that happens? Probably with cellular gel-pack processing modules, or some such!
 
At April 12, 2005 6:47 AM, Blogger wise donkey said...
interesting:)
and wondering like onkroes :)
 
At April 12, 2005 7:43 AM, Blogger Sray said...
Onkroes: I still have a 300MHz Pentium-II PC, sitting nicely beside a 3.2GHz laptop :-D, as I am loath to let go of any piece of hardware that is still working great. So, as long as things keep working, people will keep using them!

WD: Thanks for all your comments :-). Technology is moving so fast, it is a wonder to behold!
 
At April 12, 2005 8:57 AM, Blogger wise donkey said...
:)sray

What you write is easy to understand, but since i dont know the basics, its sometimes tough to comprehend and appreciate the exact implications. But your blog is a wonderful way for me to keep in touch with science and so though i might just write just wow, interesting etc, i really appreciate your blog and find it very informative and useful:)

Keep Going:)
 
At April 12, 2005 9:02 AM, Blogger Sray said...
WD: Thanks a lot for your compliments! I understand that the posts might be hard to read sometimes, and that is why I try to put in a lot of html links that point to relevant topics and concepts. It is not possible to discuss every detail in the post, even though I wish I could do that.

Thanks again! If you do not understand something in the post, dont hesitate to leave a comment, and I will try to address it :-).
 


Sunday, April 10, 2005

Solar flares are the most enigmatic of the displays that the Sun puts up for us mortals. Huge wisps of gas erupt from the solar surface, unleashing impressive electromagnetic storms that often knock out Earth-orbiting satellites and interfere with our television broadcasts. But the processes that cause such flares are largely a mystery. Now scientists from the Mullard Space Science Laboratory (MSSL) and University College London have discovered new evidence that points to the cataclysmic events that trigger a solar flare and the mechanisms that drive its subsequent evolution.

A Solar Flare (Courtesy: SOHO)
A really large solar flare was observed by the ESA-NASA SOHO spacecraft on 15 July 2002. A detailed analysis of the flare, just published by Louise Harra of MSSL, shows that it was a complex event with three eruptions, each one triggering the next like a domino effect. The solar flare's explosive power was 5,000 million times greater than an atomic bomb, hurling a billion tonnes of hot gas towards the Earth at speeds of around half a million miles an hour :):).
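A rough travel-time estimate for that gas cloud follows from the quoted speed and the mean Sun-Earth distance (a sketch that assumes constant speed; real eruptions accelerate and decelerate en route):

```python
SUN_EARTH_MILES = 93.0e6  # mean Sun-Earth distance, miles
SPEED_MPH = 0.5e6         # "half a million miles an hour", from the post

hours = SUN_EARTH_MILES / SPEED_MPH
print(f"~{hours:.0f} hours (~{hours / 24:.1f} days) to reach Earth")
```

Even at that enormous speed, the cloud takes the better part of a week to arrive, which is what makes advance prediction of flares so valuable.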
The analysis showed that the explosion was triggered by the sudden emergence of plasma from below the Sun's surface, close to an existing region of strong magnetic field. The two magnetic fields collided, releasing tremendous energy in the form of the flare. This contradicts the current understanding of flare creation, which asserts that flares form when a magnetic field line entangles and reconnects in the corona.
Now, it looks like flares can also occur when magnetic lines of force collide with each other. Since scientists can observe the growth of these lines of force on the surface of the Sun (by tracking the radiation emitted by the charged particles going around these lines), they should be able to predict when two lines will collide. This will allow scientists to predict the occurrence of such flares.
Such predictions are important, since a large flare can destroy expensive satellites, and can also be a hazard for astronauts on the International Space Station. By predicting flares before they occur, astronauts will have a much greater chance of taking shelter before the flare reaches the Earth.


5 Comments:

At April 11, 2005 12:30 PM, Blogger Wayne Smallman said...
I read some study material about the Sun some time last year, all very fascinating stuff.

Now one thing puzzles me. The article discussed the different layers of the Sun and how with various satellites and telescopes, scientists have been able to peer deep into the Sun. In some cases, many hundreds of thousands of miles.

How? How do they do that?
 
At April 11, 2005 12:56 PM, Blogger Sray said...
The Sun is mostly gas. Different layers of the Sun are at different temperatures, and composed mostly of hydrogen. Gas at different temperatures radiates at different frequencies (see black body radiation), according to Stefan's Law (P = sigma*A*T^4, where sigma is a constant, P is the power output, T is temperature, and A is surface area). So if you look at the spectrum of the light emitted by the Sun, you will be able to decipher how much energy each layer is emitting, and what it is composed of!
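The formula in the comment above can be applied to the Sun as a whole. This is a sketch; the 5,778 K effective surface temperature and the solar radius are standard textbook values, not figures from the post:

```python
import math

SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_power_w(temp_k, area_m2):
    """Stefan's Law from the comment: P = sigma * A * T^4."""
    return SIGMA * area_m2 * temp_k ** 4

r_sun = 6.96e8                    # solar radius, meters
area = 4 * math.pi * r_sun ** 2   # surface area of a sphere
print(f"Solar luminosity: ~{radiated_power_w(5778, area):.2e} W")
```

The result lands close to the measured solar luminosity of about 3.8e26 W, which is a nice check that the surface-temperature picture holds together.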
 
At April 11, 2005 1:27 PM, Blogger Sray said...
Gindy, I explained the mechanism in my previous comment.. hope it is clear!

Earth's magnetic field deflects most of the flares. Only a really large flare is able to disrupt cell-phone coverage etc. by affecting the satellites. An even larger flare can cause electrical disturbances that can affect the ground (esp. power lines and stations).
 
At April 12, 2005 6:43 AM, Blogger wise donkey said...
WOW
and the pic fascinating:)
 
At April 12, 2005 5:12 PM, Blogger Sray said...
Most modern cell-phone networks have a lot of error-correction built in. These are able to avoid small-scale disruptions in service, by a) routing to other towers, b) reconstructing the signal using redundant data, and c) using a spread-spectrum technique. So small flares are mostly unable to affect digital communications, but analog stuff might get a little more snow in some cases (similar to sun fades).
 
