Why aren’t women better represented in science?
This article from Stanford University neurobiologist Ben Barres addresses an ugly controversy in science … and does so well enough that I’m going to get out of the way.
Last year, Harvard University president Larry Summers suggested that differences in innate aptitude rather than discrimination were more likely to be to blame for the failure of women to advance in scientific careers. Harvard professor Steven Pinker then put forth a similar argument in an online debate, and an almost identical view was elaborated in a 2006 essay by Peter Lawrence entitled 'Men, Women and Ghosts in Science'. Whereas Summers prefaced his statements by saying he was trying to be provocative, Lawrence did not. Whereas Summers talked about “different availability of aptitude at the high end,” Lawrence talked about average aptitudes differing. Lawrence argued that, even in a utopian world free of bias, women would still be under-represented in science because they are innately different from men. …
I will refer to this view — that women are not advancing because of innate inability rather than because of bias or other factors — as the Larry Summers Hypothesis. It is a view that seems to have resonated widely with male, but not female, scientists. Here, I will argue that available scientific data do not provide credible support for the hypothesis but instead support an alternative one: that women are not advancing because of discrimination.
Barres includes a pair of sentences in his article that I’m not just going to feature here … I’m going to save them. I’m going to keep them ready, because they’ve been needed often, and they’ll be needed again.
I am suspicious when those who are at an advantage proclaim that a disadvantaged group of people is innately less able. Historically, claims that disadvantaged groups are innately inferior have been based on junk science and intolerance.
That’s far from the only valuable statement in this powerful, evidence-based, fist-to-the-eye of those who have power and think they deserve it. Bookmark this one.
Then come inside. There’s more to read.
Eight in one cancer test.
A very large group of researchers, primarily from Johns Hopkins, has completed a large-scale study of a test that sets out to detect not one type of cancer but eight: ovarian, liver, stomach, pancreatic, esophageal, colorectal, lung, and breast.
CancerSEEK tests were positive in a median of 70% of the eight cancer types. The sensitivities ranged from 69% to 98% for the detection of five cancer types (ovary, liver, stomach, pancreas, and esophagus) for which there are no screening tests available for average-risk individuals. The specificity of CancerSEEK was > 99%: only 7 of 812 healthy controls scored positive. In addition, CancerSEEK localized the cancer to a small number of anatomic sites in a median of 83% of the patients.
That’s not bad at all, especially as these were all people in the early stages of their disease, with cancer limited to one area. There is some bad news hidden below the headlines: breast cancer detection rates were only 33 percent. At the far end of the scale, ovarian cancer detection was 98 percent. In fact, for the five cancers listed in the abstract (ovary, liver, stomach, pancreas, and esophagus), the test seems as good as or better than any existing individual test.
By way of comparison, screening mammograms miss about 20% of breast cancers present at the time of screening. Tests for other cancers, like the standard PSA test for prostate cancer, have a similar rate of false negatives.
Even better, the rate of false positives for CancerSEEK was apparently very low (less than one percent), and in many cases CancerSEEK was able to localize the cancer, especially later-stage cancers, where it located 78 percent of them. For a blood test, that’s pretty amazing.
While so-called ‘liquid biopsies’ have been growing in number over the last few years, many of them are either costly or time-consuming to run. CancerSEEK seems to get around both these problems, returning a prompt and apparently cost-effective result. The ability to detect multiple cancers from a single test is a big bonus, and even where the false-negative rate is high — like breast cancer — the false-positive rate is low enough that it’s certainly worth making the check, so long as everyone understands what a positive result does and doesn’t mean.
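To see why that low false-positive rate is the headline number, here’s a minimal sketch of the screening arithmetic using the study’s rough figures (70 percent sensitivity, 99 percent specificity). The 1 percent cancer prevalence is purely an illustrative assumption, not a number from the paper.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive test result reflects actual disease (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# CancerSEEK's rough numbers; the 1% prevalence is an assumption for illustration.
ppv = positive_predictive_value(sensitivity=0.70, specificity=0.99, prevalence=0.01)
print(f"{ppv:.0%}")  # roughly 41% of positives would be true cancers
```

Drop the specificity to 95 percent and the same positive result would be right only about one time in eight, which is why a sub-one-percent false-positive rate matters so much for a test aimed at the general population.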
Still, all those great statistics for CancerSEEK came from tests against patients whose cancer had already been diagnosed. To find out how it actually performs at screening for cancer in the general population, the next stage is a trial with 10,000 apparently healthy people observed over a period of five years, which suggests it’s going to be some time before CancerSEEK becomes part of your annual checkup.
Getting climate change estimates about as good as we can get them
A trio of UK scientists looked at historical climate data more with an eye to variability than to determining the actual amount of change. For any estimate based on historical data, there’s going to be a definitive limit set by the availability and precision of that data. So this group worked to define those limits and came up with a better sense of what the temperature change will be if we hit the Paris CO2 target. This isn’t a model of the climate. It’s not a prediction of where CO2 will be in some number of years. It’s simply an attempt to say “if CO2 moves to X, then temperature goes up by Y.” The Intergovernmental Panel on Climate Change had pinned that number, the equilibrium climate sensitivity (ECS), at 1.5–4.5 degrees Celsius. As it turns out …
Here we present a new emergent constraint on ECS that yields a central estimate of 2.8 degrees Celsius with 66 per cent confidence limits (equivalent to the IPCC ‘likely’ range) of 2.2–3.4 degrees Celsius.
Which isn’t great, and suggests that even if the targets are hit, the climate will still warm more than two degrees. But at least this is one paper that didn’t end with “uh oh, it’s worse than we thought.”
A material that’s better at converting power to light than it should be
Perovskite is a mineral that ranges from yellow-orange to black and is found, among other places, near Magnet Cove in Arkansas. But it’s also the name of a whole group of materials. The mineral? Calcium titanium oxide. The star of today’s news? Cesium lead halide. If that sounds like a completely different chemistry, that’s because it is. But the two perovskites share a crystal structure. And when lead halides meet the perovskite structure, something rather unexpected happens.
Lead halide perovskites seem to dispose of all conventional wisdom in materials science. Like organic semiconductors, they are relatively easy to fabricate, and their bandgap (a property that determines their conductivity and optical properties) can be tuned by varying their composition. Yet, like thin-layer (epitaxial) inorganic semiconductors, they are highly crystalline and exhibit efficient charge transport. It is as if their properties were selected from a materials scientist’s wish list, combining the best aspects of organic molecules, nanocrystals and epitaxial inorganic semiconductors.
When a photon smacks into these perovskites they toss off something called an “exciton.” An exciton is an electron plus a missing electron. Seriously. It’s an electron paired with a spot where an electron should be, but isn’t. Which is one of those things that makes people like me, who spent their critical school years studying capital-P Perovskite rather than the lower-case perovskites, shrug and give out an “If you say so.” There are a couple of different versions of this electron plus electron-hole pairing, one called a singlet, the other a triplet. Triplets are usually weak. But with these particular perovskites, they’re not.
Research into hybrid perovskites has been fuelled in the past few years by the successful incorporation of these materials into solar cells. Such devices can now convert more than 22% of the energy received from sunlight into electricity, which is a record for perovskite solar cells.
The discovery that cesium lead halides can pop out bright exciton triplets suggests that members of this crystal family might have even better solar cell potential — if some manufacturing difficulties can be overcome. It also gives a hint that there might be a way to generate bright triplets with other materials.
And hey, excitons are electrically neutral quasiparticles. I bring this up only so I get to use the word “quasiparticles.” Keep that in mind for your next Star Trek fan film.
Climate change likely to be … yes … worse than the targets.
While the UK group was looking at how we make projections based on past data, they were making the assumption we hit the targets for CO2. But a pair of investigators at Stanford looked at how the world is doing so far in reducing fossil fuel use and suggest we’re going to miss those targets. They come up with a temperature for 2100 that’s about 0.5 degree C above previous projections.
Again, this isn’t a new model or some new means of estimating possible climate change. It’s just taking the existing numbers and plugging in improved data.
‘Never mind’ about that waste gas to gasoline conversion
Last August, a group of scientists from the University of Texas at Arlington published a paper on how they were able to take CO2, mix it with steam, and pass it over a heated cobalt plate in a pressure vessel to generate complex hydrocarbons.
An efficient solar process for the one-step conversion of CO2 and H2O to C5+ liquid hydrocarbons and O2 would revolutionize how solar fuel replacements for gasoline, jet, and diesel solar fuels could be produced and could lead to a carbon-neutral fuel cycle.
But this week, they had to make the kind of admission that no scientist wants to make.
The authors wish to note the following: “We have now demonstrated that our results in the above work are largely due to artifacts and that the underlying thesis of this work has not been shown.”
Ouch. And … good for them. This is exactly how science is supposed to work. This kind of issue, where very small measurements turn out to be inaccurate and results are generated through what amounts to either “noise” in the system or an issue with how the measurements are done, is extremely common when looking at experiments where the results are tiny—think of all those cold fusion attempts and how hard it was to figure out if there was ‘excess’ heat in a system that was far from closed. That these guys were willing to come back and drive a stake through their own work is a good thing. Even if it had to be painful.
How society deals with the “old” and with the “oldest old.”
A 14-member group of scientists, public health experts, and economists looked at the aging population and in particular the rapid growth of the “oldest old” — people above 80. In particular, they looked at how differences in societies affected the experience of these older people.
And they put a number to it.
An important first step is to carefully measure how well a society provides a context that facilitates successful aging. Our newly devised, comprehensive Aging Society Index, which measures societal adaptation to aging, is an important first step in this direction.
The United States generally had a lot going for it when it came to an elderly population, except that America’s older folk felt pretty darn antsy. Why did the United States in particular display “high levels of insecurity?” It turns out to be, not too shockingly, because older people were constantly worried about whether Social Security and Medicare were going to be ended, reduced, altered, or made too difficult to access. Why did they have that worry? Because people were constantly making attacks on Social Security and Medicare.
The end result is a score that’s not bad for the US … but does show why we get so few of those desirable immigrants from Norway.
Looking for criminal brains.
A trio of authors from the US and Germany went searching for the neurology of crime.
… the finding that focal lesions of the ventromedial prefrontal cortex could lead to immoral and even criminal behavior generated considerable surprise and interest. While a number of rare cases have now been described in whom a focal lesion caused criminality, these are neither very consistent (the lesions occur in several different anatomical locations) nor at all reliable (only a small fraction of patients, for any lesion location, show criminal behavior).
There’s little doubt that damage to the brain can cause people to evince new behaviors, including behaviors that break with societal norms. This article, which looked at the effect of lesions from a neural network standpoint, built off an earlier publication that looked at people who displayed criminal activity after, but not before, they were known to have suffered brain damage.
Following brain lesions, previously normal patients sometimes exhibit criminal behavior. Although rare, these cases can lend unique insight into the neurobiological substrate of criminality. Here we present a systematic mapping of lesions with known temporal association to criminal behavior, identifying 17 lesion cases.
All of which makes me decidedly uneasy. There’s no doubt that brain damage can affect decision making skills and ability to conform to social expectations. But these studies are looking at such small numbers of people, and they’re starting with people who have these lesions and are known to have committed crime. Do 80 percent of people with such damage engage in criminality? 8 percent? 0.08 percent? Understanding that all behavior is ultimately a product of processes in the brain and not a mythical “moral fiber” is important, and looking at these issues for their potential in diagnosing an issue and possibly offering treatment is also valuable. But the idea of using them as a prediction of future behavior is more than a little frightening.
Nano-Origami graphene paper for the win!
I’m not sure it would be possible to pack an idea with more nano-punk, diamond age, just plain coolness than this one — machines created by folding 1-atom thick sheets of carbon graphene into nano-scale origami structures that can be set loose like cell-sized robots.
The resulting machines are freely deployed in solutions, can change shape in fractions of a second, carry loads large enough to support embedded electronics, and can be fabricated en masse. This work opens the door to a generation of small machines for sensing, robotics, energy harvesting, and interacting with biological systems on the cellular level.
This group from Cornell is sure to set loose a thousand competing visions of nanobot heaven and grey goo hell with their tiny, tiny biomorphs. What’s particularly exciting about their process is that, while some nanomachines are dependent on the tedious construction of atom-by-atom gears and motors, with self-assembly proving to be a challenge, these devices are essentially “printed” on graphene sheets bonded to a nanometer-thick layer of glass. The result is actuators that can be flexed quickly, produced in large numbers at a pass, and arranged to create more complex forms. They also seem strong enough to support connecting them to similar-scale electronics to give the origamibots some skills.
Although the graphene bimorphs are only nanometers thick, they can lift these panels, the weight equivalent of a 500-nm-thick silicon chip. Using panels and bimorphs, we can scale down existing origami patterns to produce a wide range of machines.
I’m laying odds that someone has folded a cell-sized unicorn. Because of course they would.
Bird skulls, nothing but bird skulls
This study by a pair from University College London was done in a way that Darwin would recognize — looking at the plain morphology of many, many (many) bird skulls. Pretty much all the birds. But they did it not just by pulling out a hand lens and some calipers. They went with laser scanners and modeling software.
The focus here was on looking at how these skulls displayed features of “mosaic evolution.” Mosaic evolution reflects how some parts of the anatomy may be affected by an evolutionary change, but other portions are retained as they were. In other words, not everything in the body changes at the same time or at the same rate.
Appropriately enough, the most famous example of mosaic evolution dates back to Darwin’s time, when Thomas Huxley made a comparison between a small theropod dinosaur and the early bird Archaeopteryx. Huxley showed that most of the anatomy of the two was extremely similar. It was just the elongated hands of Archaeopteryx — and those marvelously preserved feathers — that differentiated the two.
What the pair of researchers at University College did was to compare the different parts of the skull across the whole clade of birds. What they found was sharply different rates of evolutionary change among different parts of the skull. Some parts of the skull changed rapidly early in the development of birds, then settled down to be amazingly consistent across a wide range of species and time. Other parts have maintained a lower, but steady, rate of change and differentiation.
We find that the avian cranium is highly modular, consisting of seven independently evolving anatomical regions. The face and cranial vault evolve faster than other regions, showing several bursts of rapid evolution. … Psittaciformes (parrots) exhibit high evolutionary rates throughout the skull, but their close relatives, Falconiformes, exhibit rapid evolution in only the rostrum. Our dense sampling of cranial shape variation demonstrates that the bird skull has evolved in a mosaic fashion reflecting the developmental origins of cranial regions, with a semi-independent tempo and mode of evolution across phenotypic modules facilitating this hyperdiverse evolutionary radiation.
The completeness of this study was enough to generate a separate article from a researcher at Bath, simply pointing out the beauty and importance of this work.
In PNAS, Felice and Goswami exemplify the vanguard of comparative vertebrate morphology by taking up the challenge of characterizing and analyzing avian phenotypic disparity on a scale that was, until quite recently, unimaginable.
That seems like the kind of praise that would leave any researcher … preening.
How primates store the image for a face.
Those of us with varying degrees of prosopagnosia can now be jealous of research monkeys, as laboratory macaques have demonstrated the ability to store and recall human faces. However, that jealousy should be tempered by how we know this: the macaques’ brains were closely observed to locate a patch of only about 100 neurons whose firing could be used to decode the face they were seeing.
As it turns out, macaques don’t seem to encode every face as a unique image. Instead, you can think of it as one of those kits that police sketch artists sometimes use: that kind of eyes, this sort of face shape, hair kind of like that, etc. The result is that each face can be turned into a brief code that, when decoded, picks out the right components to assemble an individual face.
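As a toy illustration of that sketch-kit idea, here’s a sketch of a combinatorial code. Everything here is invented for illustration — the component names are hypothetical, and the actual macaque code is a firing-rate code over roughly 100 neurons, not integers — but it shows how a handful of component choices collapse into one compact code.

```python
# Hypothetical facial components; the real code lives in neuron firing rates.
EYES = ["round", "narrow", "wide"]
NOSES = ["short", "long"]
MOUTHS = ["thin", "full"]

def encode(eyes, nose, mouth):
    """Pack three component choices into one small integer code."""
    code = EYES.index(eyes)
    code = code * len(NOSES) + NOSES.index(nose)
    code = code * len(MOUTHS) + MOUTHS.index(mouth)
    return code

def decode(code):
    """Recover the component choices from the integer code."""
    code, m = divmod(code, len(MOUTHS))
    e, n = divmod(code, len(NOSES))
    return EYES[e], NOSES[n], MOUTHS[m]

print(encode("wide", "long", "full"))  # 11 -- one face, one small number
print(decode(11))                      # ('wide', 'long', 'full')
```

Twelve distinct faces fit in codes 0–11 here; scale the component lists up and the same scheme covers an enormous face space with a very short description, which is the efficiency the researchers found in the neural code.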
An earlier article included examples of how the researchers were able to build up a set of face-bits that would allow them to generate a predicted face from the code they were seeing in the macaques’ brains (check it out, it’s kind of … spooky).
This new article looks at just how good the macaques’ code is and finds it very good. In fact, it’s pretty much optimized for picking out primate facial features and combining them into the smallest expression possible. It also isn’t the first such code researchers have found.
I conclude that the anterior medial face patch uses a combinatorial rate code, one with an exponential distribution of neuron rates that has a mean rate conserved across faces. Thus, the face code is maximally informative (technically, maximum entropy) and is very similar to the code used by the fruit fly olfactory system.
It shouldn’t be surprising that 4 billion years of evolution not only built nervous systems, but generated highly optimized ways to store information in them.
Nature offers an editorial on science “After a year of President Trump.”
After a year of President Trump, scientists in the United States are doing their best in difficult circumstances, and Nature applauds them for it. It’s increasingly clear that Trump has been just as bad for many aspects of science as we and others feared. Most crucially, the role of science and scientific advice in public life has been repeatedly undermined.