StellarWindow turns your laptop into a virtual planetarium

StellarWindow consists of a USB stick containing a compass and accelerometer that can identify which celestial objects a user is pointing their computer at. Credit: Hobby Media

If you enjoy looking at the stars, but get a little impatient trying to figure out which way to hold your star map to identify the constellations, a new software program may make things easier. Called StellarWindow, the program gives you a real-time guided tour of the night sky wherever you're looking. You simply insert a USB stick and CD into your laptop, tablet, or PC. The stick contains an embedded magnetic compass and accelerometers for sensing tilt. By pointing your computer at a certain area of the sky, the system automatically identifies the stars or planets in that location and displays stock photos and additional information.

The concept also works in reverse: StellarWindow has a voice recognition system, so users can speak the name of a star, constellation, or planet, and the software will tell them how to point their computer in the right direction.

StellarWindow is being released by Fairy Devices, Inc., a Japanese start-up company created by a group of students from Waseda University. Fairy Devices plans to release the software by the end of 2008 for about 26,000 yen ($250).

More information: Fairy Devices product page, via Hobby Media

Citation: StellarWindow turns your laptop into a virtual planetarium (2008, September 5) retrieved 18 August 2019 from

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
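The pointing math behind a device like this is straightforward: the accelerometer gives the lid's tilt above the horizon, and the compass gives its heading, which together form an altitude/azimuth direction to look up in a star catalog. The sketch below is a minimal illustration of that idea; the axis convention (x along the pointing direction) and the function name are assumptions for illustration, not details of StellarWindow's actual firmware.

```python
import math

def pointing_direction(accel_g, heading_deg):
    """Estimate the sky direction a device points at.

    accel_g: (ax, ay, az) accelerometer reading in units of g, using a
    hypothetical convention where +x runs along the pointing axis.
    heading_deg: magnetic compass heading in degrees (0 = north).
    Returns (altitude_deg, azimuth_deg) for a star-catalog lookup.
    """
    ax, ay, az = accel_g
    # When the device points straight up, gravity pulls along -x, so the
    # elevation of the pointing axis follows from the gravity vector.
    altitude = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    azimuth = heading_deg % 360.0  # compass heading doubles as sky azimuth
    return altitude, azimuth
```

Given the altitude/azimuth pair, the software only has to query a star catalog for objects near that direction at the current time and location.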

WowWee Rovio WiFi Webcam A Consumer Wunderkind

WowWee introduces the first consumer WiFi roving webcam, for being there without having to go there. The WowWee Rovio is PC/Mac compatible, measures approximately 13.5 x 12 x 14 inches, and weighs only 5 pounds. Rovio's three omni-directional wheel design allows it to roam around an office, home, or small manufacturing area, maneuvering its way around pets, furniture, and obstructions without tipping over. Rovio connects to a user's WiFi network via a laptop, game console, or wireless cell phone. The technical requirements include Internet Explorer 6 or higher, Mozilla Firefox 1.5 or higher, Safari 3.0 or higher, Safari Mobile, or Opera Mobile, plus a high-speed Internet connection, an 802.11b/g wireless access point, and a USB port. The rechargeable battery pack is included, and extra charging stations are available for docking Rovio in various rooms.

Rovio's TrueTrack beacon guides it back home to its charger station, so you never have to worry about it running out of battery life. The Rovio comes with a CD interactive setup guide, a USB cable, a charger station, and an AC power adapter. Once the device is connected to the wireless network, the setup guide does the rest. To access Rovio from an external network, a user needs to set up port forwarding on their router by following the steps in the written guidelines online. A recent firmware update may in time clear up connectivity issues cited by some reviewers.

The WowWee Rovio may not be for people short on patience while it grows up, but Wee Willie Winkie and fun-loving gadget fans will love it.

WowWee Rovio

Explore further: SPR Therapeutics' neuromodulation system treats phantom-limb pain

Citation: WowWee Rovio WiFi Webcam A Consumer Wunderkind (2009, January 6) retrieved 18 August 2019 from

© 2009

Adaptive headlamp system introduced

The independent industrial group Valeo, which is headquartered in France, has introduced a "BeamAtic" adaptive headlight system that enables drivers to keep their lights on high beam without dazzling other drivers.

Valeo's BeamAtic Premium system

High beam provides the most efficient lighting for vehicles, but every time another vehicle approaches or is in front, the lights must be switched to low beam. In the Valeo BeamAtic Premium system, which is also called an "Adaptive Driving Beam," switching between high and low beams is unnecessary because the lighting is automatically adjusted in response to another vehicle being detected by an onboard camera.

In the high beam position, maximum light is maintained everywhere except in zones occupied by other vehicles, either in front or approaching. Each headlamp produces a cone of light, but when a vehicle is detected a mobile shield is brought into position to block off some of the light. The onboard camera is equipped with powerful image processing software that enables it to track the trajectory of the other vehicle and ensure the other driver is not dazzled by the high beam.

Drivers in other vehicles see what appears to be low beam light, while the driver in the car fitted with BeamAtic sees what appears to be normal high beam light, with the road fully illuminated.

The system includes a "Tourism" feature that adapts the lighting for driving in countries in which traffic travels on the opposite side of the road. Valeo says the new system will improve driver safety, since objects at the sides of the road are more clearly visible. The system won the International Technical Innovation Grand Prix at the international trade show Equip Auto in September 2009.

The first BeamAtic Premium headlight systems will be available from lamp manufacturer Ichikoh Industries in Japan from this month, and Valeo is negotiating with Japanese car makers for the system to be included in future vehicles. The Valeo Group is one of the world's top automotive suppliers, producing a range of vehicle components. It has branches in 27 countries and employs 56,000 people.

Explore further: Honda Develops New Multi-View Vehicle Camera System to Provide View of Surrounding Areas

Citation: Adaptive headlamp system introduced (2010, September 2) retrieved 18 August 2019 from

© 2010
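The masking logic described above (keep the full high-beam cone except for the angular sector occupied by a detected vehicle) can be sketched in a few lines. This is a toy model with assumed parameters for the beam half-angle and safety margin; the real BeamAtic shield geometry and camera pipeline are proprietary and not described in the article.

```python
def masked_sector(vehicle_bearing_deg, vehicle_width_deg,
                  beam_half_angle_deg=20.0, margin_deg=1.5):
    """Angular sector of the high-beam cone to blank out so a detected
    vehicle is not dazzled.  Bearings are relative to the beam axis.
    Returns (lo, hi) in degrees, or None if the vehicle lies entirely
    outside the cone and no masking is needed.
    """
    lo = vehicle_bearing_deg - vehicle_width_deg / 2.0 - margin_deg
    hi = vehicle_bearing_deg + vehicle_width_deg / 2.0 + margin_deg
    # Clamp the masked sector to the beam cone itself.
    lo = max(lo, -beam_half_angle_deg)
    hi = min(hi, beam_half_angle_deg)
    return (lo, hi) if lo < hi else None
```

In a running system this sector would be recomputed every camera frame as the tracked vehicle moves, and the mechanical shield repositioned accordingly.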

New explanation for Hawaiian hot spot

Scientists in the US have suggested that volcanic activity in Hawaii could be fed by a giant hot rock pool 1,000 kilometers west of the islands and in the Earth's mantle, rather than being fed by a hot plume of magma as previously thought.

Hawaii Volcanoes National Park.

Hawaii's volcanoes have puzzled scientists for decades because the islands lie in the middle of a tectonic plate rather than at the edge, where volcanic activity would be expected. Until now the prevailing theory has been the mantle plume theory, which suggested the volcanism was fed by a hot plume rising from the Earth's mantle, but so far efforts to detect a hot plume seismically have remained inconclusive.

The mantle plume theory was developed by US scientist Jason Morgan in 1971 and suggests the tectonic plate is sliding above a stationary plume of molten rock lying deep within the mantle, with upwellings of lava forming undersea volcanoes that eventually grew upwards to become islands. As the tectonic plate continued to move, the volcanoes were extinguished and some of the islands eroded and dropped below sea level. The result was the Hawaiian-Emperor seamount chain, which stretches from the Aleutian Trench in the northwest to the present-day Hawaiian islands in the southeast.

The new research, by a team led by seismologist Dr Robert van der Hilst of the Massachusetts Institute of Technology (MIT), imaged the scattering of seismic waves from discontinuities in the mantle to try to identify plumes and other subterranean structures. Discontinuities are formed when the rocks in the mantle are squeezed together at such high pressures that they abruptly reorganize themselves. The team included data from almost 170,000 reflected seismic signals along with seismic data from around 4,800 earthquakes in the Pacific region.

The next step in the research was to use computer models of the behavior of a variety of minerals at different temperatures and pressures to predict the temperature of the regions beneath the Earth's surface that reflect the seismic waves.

The results suggested a shallow 800- to 2,000-kilometer-wide "thermal anomaly" exists near the top of the lower mantle, around 720 kilometers beneath the surface, to the west of Hawaii. This suggests that the mantle plume theory might be wrong, since the findings do not support hot material rising as a narrow vertical plume.

According to van der Hilst, the current volcanic activity might be fuelled instead by molten rocks bubbling upwards from the eastern edge of the pool of trapped materials "like a lava lamp" rather than by a mantle plume. Other scientists have some misgivings, with Thorne Lay of the University of California pointing out that some of the data selected for the analysis were not clean enough, since 170,000 good-quality waveforms do not exist, and using noisy data could have introduced errors.

Dr van der Hilst agreed the team used data other seismologists would discard as too noisy, but said they were able to "exploit the noise reduction of very large data sets." He also said that carefully selecting data could produce bias in the results.

The paper was published in the journal Science on May 27.

More information: Seismic Imaging of Transition Zone Discontinuities Suggests Hot Mantle West of Hawaii, Science 27 May 2011: Vol. 332 no. 6033 pp. 1068-1071. DOI: 10.1126/science.1202731

ABSTRACT: The Hawaiian hotspot is often attributed to hot material rising from depth in the mantle, but efforts to detect a thermal plume seismically have been inconclusive. To investigate pertinent thermal anomalies, we imaged with inverse scattering of SS waves the depths to seismic discontinuities below the Central Pacific, which we explain with olivine and garnet transitions in a pyrolitic mantle. The presence of an 800- to 2000-kilometer-wide thermal anomaly (ΔTmax ~300 to 400 kelvin) deep in the transition zone west of Hawaii suggests that hot material does not rise from the lower mantle through a narrow vertical plume but accumulates near the base of the transition zone before being entrained in flow toward Hawaii and, perhaps, other islands. This implies that geochemical trends in Hawaiian lavas cannot constrain lower mantle domains directly.

Explore further: New Technology Allows Geophysicist To Test Theory About Formation of Hawaii (w/ Podcast)

Citation: New explanation for Hawaiian hot spot (2011, May 27) retrieved 18 August 2019 from

© 2010 PhysOrg.com
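A standard way to turn discontinuity observations like these into a temperature estimate is the Clapeyron relation: a hot region deflects a phase-transition discontinuity vertically, and the depth shift maps to a temperature anomaly through ΔT = ρgΔz/γ. The sketch below uses textbook values for the 660-km transition (Clapeyron slope γ ≈ −2.8 MPa/K, mantle density ρ ≈ 4000 kg/m³); these are illustrative assumptions, not the parameters actually used in the Science paper.

```python
def temp_anomaly_from_shift(depth_shift_m, clapeyron_mpa_per_k=-2.8,
                            rho=4000.0, g=9.8):
    """Temperature anomaly implied by a vertical deflection of a mantle
    phase-transition discontinuity (e.g. the 660-km olivine transition).

    depth_shift_m: deflection in meters (negative = discontinuity shallower).
    Returns the anomaly in kelvin; with a negative Clapeyron slope, a hot
    anomaly lifts the discontinuity upward.
    """
    delta_p_mpa = rho * g * depth_shift_m / 1e6  # pressure change, MPa
    return delta_p_mpa / clapeyron_mpa_per_k
```

With these numbers, a 20-km upward deflection corresponds to an anomaly of about +280 K, comparable in magnitude to the ΔTmax of roughly 300 to 400 K quoted in the abstract.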

Charlotte robot tells the world where it's not going

A resourceful thinker who likes to learn as he goes, Kevin Ochs started out on a project with the intention of brushing up his skills in C++ programming. He has come up with something quite interesting as a result: a six-legged robot that talks about its progress while navigating obstacles. "This robot project was a mental exercise for me," he said on "My Raspberry Pi Robot Called Charlotte," his Web page. "It had been several years since I had done anything with C++ and I needed to shore up that skill set."

The distinctive edge to Charlotte is that it not only can avoid obstacles by moving out and away from them but can talk about its navigations, with the added twist of an open source speech synthesizer, eSpeak. According to the eSpeak site, this is a compact software speech synthesizer for Linux or Windows, which uses a "formant synthesis" method. This allows many languages to be provided in a small size. The speech is clear, but has the limitation that it is not as natural or smooth as larger synthesizers based on human speech recordings.

The Ochs creation, Charlotte, moves about and talks, a design making use of a kit then custom-fashioned by Ochs. First, he turned to the robot shop Trossen Robotics, a business with an ample variety of robot kits and parts. Ochs chose a hexapod robot kit. "I found a robot kit sold by Trossen Robotics that visually seemed interesting and was powered by an Arduino-based controller. I purchased the kit and began learning how it was controlled with the stock code they provided." He then proceeded to make modifications. Mainly, he gave it a "brain" in the form of an overclocked Raspberry Pi computer. ("To note the Rpi is overclocked to 1000 MHz," Ochs said.) The Raspberry Pi resides within the Trossen-supplied body. Ochs also applied custom C++ coding with the aid of the Raspberry Pi, his own code based on or inspired by what was done in the Phoenix code.

He also used OpenNI (defined as the standard framework for 3-D sensing) and OpenCV to develop a heads-up display and collision detection; OpenCV stands for Open Source Computer Vision. It provides a computer vision and machine learning software library, with over 2,500 optimized algorithms.

More information:

Explore further: Kondo Robot releases a hexapod robot kit (w/ video)

Citation: Charlotte robot tells the world where it's not going (2013, June 3) retrieved 18 August 2019 from

© 2013
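Wiring eSpeak into a robot's control loop is usually just a matter of shelling out to the `espeak` command-line tool. The sketch below (in Python rather than Charlotte's C++, and with arbitrary speed and voice values, since Ochs' exact invocation isn't published) shows the idea: build the argument list, then hand it to the synthesizer whenever the navigation code has something to announce.

```python
import subprocess

def espeak_command(text, speed_wpm=140, voice="en"):
    """Build the argv for a call to the eSpeak synthesizer.
    -s sets words per minute and -v the voice; both are standard eSpeak
    flags, though the values here are arbitrary choices."""
    return ["espeak", "-s", str(speed_wpm), "-v", voice, text]

def announce(text):
    """Speak a status line (requires the espeak binary to be installed)."""
    subprocess.run(espeak_command(text), check=True)
```

A collision-avoidance routine might then call `announce("Obstacle ahead, turning left")` each time it replans a path.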

Astrophysicist duo proposes Planck star as core of black holes

Two astrophysicists, Carlo Rovelli and Francesca Vidotto, have uploaded a paper to the preprint server arXiv in which they suggest that a structure known as a Planck star exists at the center of black holes, rather than a singularity. This would suggest, they note, that black holes at some point return all the information they have pulled in to the universe.

This artist's concept depicts a supermassive black hole at the center of a galaxy. The blue color here represents radiation pouring out from material very close to the black hole. The grayish structure surrounding the black hole, called a torus, is made up of gas and dust. Credit: NASA/JPL-Caltech

The current thinking regarding black holes is that they have two very simple parts: an event horizon and a singularity. Because a probe cannot be sent inside a black hole to see what is truly going on, researchers have to rely on theories. The singularity theory suffers from what has come to be known as the "information paradox"—black holes appear to destroy information, which would seem to violate the rules of general relativity, because they follow rules of quantum mechanics instead. This paradox has left deep-thinking physicists such as Stephen Hawking uneasy—so much so that he and others have begun offering alternatives or amendments to existing theories. In this new effort, the pair of physicists suggest the idea of a Planck star.

The idea of a Planck star has its origins in an alternative to the Big Bang theory—this other idea holds that when the inevitable Big Crunch comes, instead of forming a singularity, something just a little more tangible will result—something on the Planck scale. And when that happens, a bounce will occur, causing the universe to expand again, and then to collapse again, and so on forever, back and forth.

Rovelli and Vidotto wonder why this couldn't be the case with black holes as well—instead of a singularity at its center, there could be a Planck structure—a star—which would allow for general relativity to come back into play. If this were the case, then a black hole could slowly over time lose mass due to Hawking radiation—as the black hole contracted, the Planck star inside would grow bigger as information was absorbed. Eventually, the star would meet the event horizon and the black hole would dematerialize in an instant as all the information it had ever sucked in was cast out into the universe.

This new idea by Rovelli and Vidotto will undoubtedly undergo close scrutiny in the astrophysics community, likely culminating in debate between those who find the idea of a Planck star an answer to the information paradox and those who find the entire idea implausible.

More information: Planck stars, arXiv:1401.6562 [gr-qc]

Abstract: A star that collapses gravitationally can reach a further stage of its life, where quantum-gravitational pressure counteracts weight. The duration of this stage is very short in the star proper time, yielding a bounce, but extremely long seen from the outside, because of the huge gravitational time dilation. Since the onset of quantum-gravitational effects is governed by energy density—not by size—the star can be much larger than planckian in this phase. The object emerging at the end of the Hawking evaporation of a black hole can then be larger than planckian by a factor (m/mP)^n, where m is the mass fallen into the hole, mP is the Planck mass, and n is positive. We consider arguments for n=1/3 and for n=1. There is no causality violation or faster-than-light propagation. The existence of these objects alleviates the black-hole information paradox. More interestingly, these objects could have astrophysical and cosmological interest: they produce a detectable signal, of quantum gravitational origin, around the 10^−14 cm wavelength.

Journal information: arXiv

Explore further: Theorists apply loop quantum gravity theory to black hole

Citation: Astrophysicist duo proposes Planck star as core of black holes (2014, February 14) retrieved 18 August 2019 from

© 2014
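The abstract's scaling claim is easy to put numbers on: the remnant emerging after evaporation is larger than the Planck length by a factor (m/mP)^n. The snippet below simply evaluates that expression for the two exponents the authors consider; the constants are standard physical values, and nothing here goes beyond the quoted scaling.

```python
M_PLANCK_KG = 2.176e-8    # Planck mass
L_PLANCK_M = 1.616e-35    # Planck length
M_SUN_KG = 1.989e30       # solar mass

def planck_star_scale(mass_kg, n):
    """Characteristic remnant size l_P * (m / m_P)**n from the paper's
    scaling argument, for n = 1/3 or n = 1."""
    return L_PLANCK_M * (mass_kg / M_PLANCK_KG) ** n

# For a solar-mass hole the factor m/m_P is about 9e37, so for n = 1/3 the
# remnant is enormously larger than Planckian, yet still microscopic.
```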

Quantum fingerprinting surpasses classical limit

As the saying goes, no two fingerprints are alike, and the same is true for quantum fingerprints. Just as a human fingerprint is only a fraction of the size of a person, yet can be used to distinguish between any two people (at least in theory), quantum fingerprints are exponentially smaller than the string of information they represent, yet they can be used to distinguish between any two strings.

Ever since quantum fingerprinting was first proposed in 2001, it has for the most part remained an interesting theoretical concept, with only a handful of protocols having managed to experimentally demonstrate the idea. Now in a new study, researchers have experimentally demonstrated a quantum fingerprinting protocol and shown that it can surpass the classical limit for solving communication complexity problems. In these problems, two parties each have a message, and they both share some of their message with a referee, who has to decide whether the two messages are the same or not. The classical limit requires that a minimum amount of information must be transmitted between each party and the referee in order for the referee to make this decision.

So far, the best communication complexity protocols have required transmitting an amount of data that is two orders of magnitude larger than the classical limit. Now in the new study, the scientists showed that quantum fingerprinting can transmit less information than that required by the classical limit, in some cases up to 84% less, by transmitting only the tiny amount of information that is contained in a quantum fingerprint. The results set a new record for transmitting the smallest amount of information for any type of communication complexity protocol.

"For the first time, we have demonstrated the quantum advantage over classic information processing in communication complexity," said coauthor Qiang Zhang, a physicist at the University of Science and Technology of China and the Jinan Institute of Quantum Technology.

The achievement could lead to a wide variety of applications in quantum communications, in particular the potential for the development of "green" (low-energy) communication methods. The results could also lead to new tests of the foundations of quantum physics, since quantum fingerprinting involves quantum phenomena such as nonlocality, which is related to quantum entanglement.

Demonstrating the potential for applications, the researchers used the new protocol to transmit 2-Gbit video files over a 20-km fiber. By transmitting only the information contained in the files' quantum fingerprints, this task requires transmitting only about 1,300 photons, which is 14% less information than that required by the classical limit.

However, the researchers note that the new protocol cannot be used for any real-world application—including sending video files—in its current form, since it still needs improvement in several areas. One drawback is that the new protocol takes more time to run than classical protocols, even though it uses less energy overall. Also, the number of transmitted photons required increases as the channel distance increases, so the quantum advantage diminishes over longer distances. The researchers plan to address these drawbacks in future work.

"Although our setup utilizes less information compared to the classical limit, it takes more time and more channel resources," Zhang said. "So we cannot find its application in its current status. We may try to improve the transmission time by using multiplexing. But we do not know whether it will be useful."

The key to experimentally realizing the quantum fingerprinting protocol is the ability to distinguish between any two strings of quantum information just by knowing their quantum fingerprints. To do this, the researchers transmitted the quantum fingerprints in the form of single-photon pulses to two detectors. If the two fingerprints/pulses are different, both detectors click; if they're identical, only one of the detectors clicks.

(Left) Comparison of the amount of information transmitted by the best classical protocol, the quantum fingerprinting protocol (black points represent experimental results at 0 km, red points at 20 km), and the classical limit. For large messages, the quantum fingerprinting protocol surpasses the classical limit. (Right) This graph shows that, as distance decreases and data size increases, the advantage of quantum fingerprinting increases. The maximum advantage is 84% less information than the classical limit. Credit: Guan et al. ©2016 American Physical Society

Illustration of the quantum fingerprinting protocol, which can transmit less information than the minimum required by the classical limit for solving a communication complexity problem. Credit: Guan et al. ©2016 American Physical Society

More information: Jian-Yu Guan et al. "Observation of Quantum Fingerprinting Beating the Classical Limit." Physical Review Letters. DOI: 10.1103/PhysRevLett.116.240502

Journal information: Physical Review Letters

Explore further: Russian scientists make teleportation a 'two-way road' using the same quantum resource

Citation: Quantum fingerprinting surpasses classical limit (2016, July 5) retrieved 18 August 2019 from

© 2016
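The detector logic described in the experiment (both detectors click only when the fingerprints differ) behaves like an optical swap test: interfering two pulses on a balanced beam splitter makes the "difference" outcome fire with probability (1 − |⟨a|b⟩|²)/2 per comparison. The toy model below illustrates that statistic for ideal pure states; the actual experiment used weak coherent pulses and many repetitions, so this is a conceptual sketch, not the paper's protocol.

```python
def difference_click_probability(a, b):
    """Probability that one ideal interference comparison flags two
    normalized fingerprint states as different:
        p = (1 - |<a|b>|^2) / 2
    Identical states are never flagged; orthogonal states are flagged
    half the time, so the comparison must be repeated for confidence.
    """
    inner = sum(x.conjugate() * y for x, y in zip(a, b))
    return (1.0 - abs(inner) ** 2) / 2.0
```

Because a "same" verdict is only ever probabilistic, the referee repeats the comparison across many pulses and declares the strings equal only if the difference event never fires.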

Taking statistics to the quantum domain

The change point problem is a concept in statistics that pops up in a wide variety of real-world situations, from stock markets to protein folding. The idea is to detect the exact point at which a sudden change has occurred, which could indicate, for example, the trigger of a financial crisis or a misfolded protein step.

Now in a new paper published in Physical Review Letters, physicists Gael Sentís et al. have taken the change point problem to the quantum domain. "Our work sets an important landmark in quantum information theory by porting a fundamental tool of classical statistical analysis into a fully quantum setup," said Sentís, at the University of the Basque Country in Bilbao, Spain. "With an ever-growing number of promising applications of quantum technologies in all sorts of data processing, building a quantum statistical toolbox capable of dealing with real-world practical issues, of which change point detection is a prominent example, will be crucial. In our paper, we demonstrate the working principles of quantum change point detection and facilitate the grounds for further research on change points in applied scenarios."

In the quantum change point problem, a quantum source emits particles that are received by a detector. At some unknown point, a change occurs in the state of the particles being emitted. Physicists have found that global measurement methods outperform all classical measurement methods for accurately identifying when the change occurred. Credit: Sentís et al. ©2016 American Physical Society

Although change point problems can deal with very complex situations, they can also be understood with the simple example of playing a game of Heads or Tails. This game begins with a fair coin, but at some unknown point in the game the coin is switched with a biased one. By statistically analyzing the results of each coin toss from the beginning, it's possible to determine the most likely point at which the coin was switched.

Extending this problem to the quantum realm, the physicists looked at a quantum device that emits particles in a certain state, but at some unknown point the source begins to emit particles in a different state. Here the quantum change point problem can be understood as a problem of quantum state discrimination, since determining when the change in the source occurred is the same as distinguishing among all possible sequences of quantum states of the emitted particles.

Physicists can determine the change point in this situation in two different ways: either by measuring the state of each particle as soon as it arrives at the detector (a "local measurement"), or by waiting until all of the particles have reached the detector and making a measurement at the very end (a "global measurement").

Although the local measurement method sounds appealing because it can potentially detect the change point as soon as it occurs, without waiting for all of the particles to be emitted, the researchers found that global measurements outperform even the best local measurement strategies.

The "catch" is that global measurements are more difficult to experimentally realize and require a quantum memory to store the quantum states as they arrive at the detector one by one. The local measurement methods don't require a quantum memory, and instead can be implemented using much simpler devices in sequence. Since global detection requires a quantum memory, the results show that change point detection is another of the many problems for which quantum methods outperform all classical ones.

"We expected that global measurements would help, as coherent quantum operations tend to exploit genuinely quantum resources and generally outperform local operations in many information processing tasks," Sentís said. "However, this is a case-dependent advantage, and sometimes sophisticated and clever local strategies are enough to cover the gap. The fact that here there is a finite performance gap says something fundamental about change point detection in quantum scenarios."

The results have potential applications in any situation that involves analyzing data collected over time. Change point detection is also often used to divide a data sample into subsamples that can then be analyzed individually.

"The ability to accurately detect quantum change points has immediate impact on any process that requires careful control of quantum information," Sentís said. "It can be considered a quality testing device for any information processing task that requires (or produces) a sequence of identical quantum states. Applications may range from probing quantum optical fibers to boundary detection in solid state systems."

In the future, the researchers plan on exploring the many applications of quantum change point detection. "We plan on extending our theoretical methods to deal with more realistic scenarios," Sentís said. "The possibilities are countless. A few examples of generalizations we are exploring are multiple change points, noisy quantum states, and detection of change points in optical setups."

More information: Gael Sentís et al. "Quantum Change Point." Physical Review Letters. DOI: 10.1103/PhysRevLett.117.150502. Also at arXiv:1605.01916 [quant-ph]

Journal information: Physical Review Letters

Explore further: Physicists retrieve 'lost' information from quantum measurements

Citation: Taking statistics to the quantum domain (2016, November 9) retrieved 18 August 2019 from

© 2016
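The coin-flip example in the article is exactly the classical maximum-likelihood change point problem: for each candidate switch point, fit a bias to the flips before and after it and keep the split that maximizes the total log-likelihood. The sketch below implements that directly; the variable names and synthetic data are my own, not from the paper.

```python
import math

def _log_lik(flips):
    """Bernoulli log-likelihood of a segment under its MLE bias."""
    n, k = len(flips), sum(flips)
    p = k / n
    ll = 0.0
    if k:          # k * log(p); the k == 0 term is 0 by convention
        ll += k * math.log(p)
    if n - k:      # (n - k) * log(1 - p)
        ll += (n - k) * math.log(1.0 - p)
    return ll

def most_likely_change_point(flips):
    """Return the index where a change in coin bias most likely occurred."""
    candidates = range(1, len(flips))
    return max(candidates,
               key=lambda c: _log_lik(flips[:c]) + _log_lik(flips[c:]))

# A fair stretch (half heads) followed by a heavily biased stretch:
flips = [0, 1, 0, 1, 0, 1, 1, 0] + [1] * 12
```

On this data, `most_likely_change_point(flips)` recovers the switch at index 8. The quantum version replaces the per-toss statistics with discrimination among sequences of quantum states.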

Evidence of ancient weathering from acid rain may explain melting of snowball Earth

Evidence of ancient weathering from acid rain may explain melting of snowball Earth

A team of researchers from China and the U.S. has found evidence of ancient weathering in a glacial deposit in China's Hunan province. In their paper published in Proceedings of the National Academy of Sciences, the team outlines their findings and explains why they believe the weathering offers evidence of acid rain that might have played a role in the development of more advanced life forms on our planet.

A composite image of the Western hemisphere of the Earth. Credit: NASA

Scientists believe that planet Earth was covered from pole to pole in ice at least twice in its long history. The most recent "snowball" event is believed to have occurred from approximately 650 to 635 million years ago. Such an event would obviously have marked a very cold period in Earth's history, but it has also led planetary scientists to wonder what might have occurred to melt the snowball. One theory suggests that even while the surface of the planet was frozen, certain processes still caused a massive amount of greenhouse gases to build up in the atmosphere. Such a buildup would have trapped heat, eventually reaching a point at which the surface ice melted. That amount of greenhouse gases, particularly carbon dioxide, would also have produced acid rain, which would have weathered rock exposed as the ice cover melted. Until now, however, scientists had not found any evidence of such weathering.

The researchers behind this new effort report that they found evidence of weathering in rocks gathered high on a mountain above a glacier. After obtaining samples and studying their magnesium isotopes, they concluded that the rocks had been subjected to intense weathering from exposure to chemicals consistent with acid rain, during a period at the end of the last snowball episode.

The researchers suggest acid rain might have fallen for hundreds of thousands of years, contributing, at least in part, to the Cambrian explosion, which occurred approximately 541 million years ago. Their thinking is that runoff from rock weathering caused by the acid rain would have made its way to the world's oceans, leading to the formation of cap carbonate on the seafloor, which they believe might have paved the way for the development of more complex life forms.

More information: Kang-Jun Huang et al. Episode of intense chemical weathering during the termination of the 635 Ma Marinoan glaciation, Proceedings of the National Academy of Sciences (2016). DOI: 10.1073/pnas.1607712113

Abstract: Cryogenian (∼720–635 Ma) global glaciations (the snowball Earth) represent the most extreme ice ages in Earth's history. The termination of these snowball Earth glaciations is marked by the global precipitation of cap carbonates, which are interpreted to have been driven by intense chemical weathering on continents. However, direct geochemical evidence for the intense chemical weathering in the aftermath of snowball glaciations is lacking. Here, we report Mg isotopic data from the terminal Cryogenian or Marinoan-age Nantuo Formation and the overlying cap carbonate of the basal Doushantuo Formation in South China. A positive excursion of extremely high δ26Mg values (+0.56 to +0.95)—indicative of an episode of intense chemical weathering—occurs in the top Nantuo Formation, whereas the siliciclastic component of the overlying Doushantuo cap carbonate has significantly lower δ26Mg values (<+0.40), suggesting moderate to low intensity of chemical weathering during cap carbonate deposition. These observations suggest that cap carbonate deposition postdates the climax of chemical weathering, probably because of the suppression of carbonate precipitation in an acidified ocean when atmospheric CO2 concentration was high. Cap carbonate deposition did not occur until chemical weathering had consumed substantial amounts of atmospheric CO2 and accumulated high levels of oceanic alkalinity. Our finding confirms intense chemical weathering at the onset of deglaciation but indicates that the maximum weathering predated cap carbonate deposition.

Journal information: Proceedings of the National Academy of Sciences

© 2016

Citation: Evidence of ancient weathering from acid rain may explain melting of snowball Earth (2016, December 13) retrieved 18 August 2019 from
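The δ26Mg values quoted in the abstract use the standard per-mil "delta" notation for isotope ratios. As a minimal illustrative sketch (not code from the paper; the normalized sample ratio below is invented for illustration):

```python
# Sketch of standard per-mil delta notation for Mg isotopes (illustrative,
# not from the paper). delta-26Mg expresses how much a sample's 26Mg/24Mg
# ratio deviates from a reference standard, in parts per thousand.
def delta26mg(ratio_sample: float, ratio_standard: float) -> float:
    """delta(26Mg) in per mil: (R_sample / R_standard - 1) * 1000."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

# A hypothetical sample whose 26Mg/24Mg ratio is 0.08% above the standard
# gives delta26Mg = +0.8 per mil, inside the +0.56 to +0.95 excursion that
# the authors interpret as intense chemical weathering in the top Nantuo
# Formation, and above the <+0.40 values of the overlying cap carbonate.
print(round(delta26mg(1.0008, 1.0), 2))  # ratios normalized to the standard
```

A value is "extremely high" or "low" only relative to the reference standard, which is why the paper reports excursions rather than raw ratios.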

Physicists experimentally verify 40-year-old fluid equations

For decades, researchers have been using equations derived in the mid-1970s for a variety of fluid applications involving inks, foams, and bubbles, among other uses. These fundamental fluid equations describe how much force is required to pull a solid particle from a liquid surface. Although the equations have been experimentally confirmed for millimeter-sized particles, experimental confirmation in the micrometer regime has been lacking.

Change in contact angle when detaching a microparticle from a liquid surface. Credit: Schellenberger et al. ©2018 American Physical Society

Aiming to fill this gap, researchers in a new study have, for the first time, simultaneously measured the capillary force on a single microparticle while imaging the shape of the liquid meniscus that forms underneath the particle and tries to pull it back to the liquid. Their results experimentally verify the 1970s fluid equations for microparticles.

The researchers, led by Hans-Jürgen Butt at the Max Planck Institute for Polymer Research in Mainz, Germany, published a paper on their experimental results in a recent issue of Physical Review Letters.

Particle behavior on liquid surfaces has many applications and has been widely studied, at least for particles down to about 0.3 millimeters in diameter. As one of these macroparticles is pulled out of the liquid, a meniscus forms between the particle and the liquid surface, creating a capillary force that tries to pull the particle back to the surface. At macroscopic scales, gravity also plays a significant role in pulling a particle back down to the liquid surface. Experiments have also shown that, as a macroparticle is being pulled up from the surface, it can slide around on the surface with the meniscus sliding underneath.

Things are different, however, at the microscale, where gravity is usually negligible compared to capillary forces. Since there have been fewer experiments with microparticles, many questions remain open, including how the capillary force and the shape of the meniscus are related, and whether microparticles can slide on the surface the way macroscopic particles can.

In their experiments, the researchers glued a glass microparticle to the end of a cantilever on a microscope and slowly immersed the cantilever with the microparticle into a container of glycerol. Using advanced microscope techniques, they simultaneously measured the capillary force and the contact angle between the microparticle and the meniscus as the microparticle was slowly lifted out of the fluid.

While the experimental results verified the fundamental fluid equations for microparticles in general, they also revealed some surprises. For instance, unlike for macroscopic particles, the contact line between a microparticle and the liquid surface is pinned for most of the detachment process. Only in the final moments, when the microparticle is about to detach, does the particle slide around on the surface.

The researchers expect that the results will be useful for the many applications that involve capillary forces on microparticles at liquid surfaces, such as mineral flotation and the deinking of paper.

More information: Frank Schellenberger et al. "Detaching Microparticles from a Liquid Surface." Physical Review Letters. DOI: 10.1103/PhysRevLett.121.048002

Journal information: Physical Review Letters

© 2018

Citation: Physicists experimentally verify 40-year-old fluid equations (2018, August 27) retrieved 18 August 2019 from
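The claim that gravity is negligible at the microscale can be made concrete with a back-of-the-envelope force comparison. The sketch below is illustrative only (not the paper's analysis), using approximate textbook values for glycerol's surface tension and glass density, and the common upper-bound estimate F ~ 2πRγ for the capillary force on a sphere:

```python
import math

# Rough comparison of capillary restoring force vs. weight for a glass
# sphere at a glycerol surface (illustrative values, not measured data).
GAMMA = 0.063       # surface tension of glycerol, N/m (approximate)
RHO_GLASS = 2500.0  # density of glass, kg/m^3 (approximate)
G = 9.81            # gravitational acceleration, m/s^2

def max_capillary_force(radius_m: float) -> float:
    """Upper-bound estimate of the capillary force pulling a sphere of
    radius R back to the liquid surface: F ~ 2 * pi * R * gamma."""
    return 2.0 * math.pi * radius_m * GAMMA

def weight(radius_m: float) -> float:
    """Gravitational force on a solid glass sphere of radius R."""
    return (4.0 / 3.0) * math.pi * radius_m**3 * RHO_GLASS * G

# Capillary force scales with R while weight scales with R^3, so shrinking
# the particle makes gravity vanish by comparison.
for radius in (1e-3, 10e-6):  # 1 mm macroparticle vs. 10 um microparticle
    ratio = max_capillary_force(radius) / weight(radius)
    print(f"R = {radius:.0e} m: capillary/weight ratio ~ {ratio:.1e}")
```

For a millimeter-sized particle the two forces are within an order of magnitude of each other, while for a 10 µm particle the capillary force dominates by roughly four orders of magnitude, which is why the experiments described above could neglect gravity.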