
Archive for the ‘futurism’ category: Page 1210

Jan 29, 2008

Cheap (tens of dollars) genetic lab on a chip systems could help with pandemic control

Posted in categories: biological, defense, existential risks, futurism, lifeboat

Cross-posted from Next Big Future.

Since a journal article was submitted to the Royal Society of Chemistry, the University of Alberta researchers have already made the processor and unit smaller and have brought the cost of building a portable unit for genetic testing down to about $100 Cdn. These systems are also portable and even faster, taking only minutes. Backhouse, Elliott and McMullin are now demonstrating prototypes of a USB-key-like system that may ultimately be as inexpensive as the standard USB memory keys in common use, costing only tens of dollars. It could help with pandemic control and with detecting and controlling tainted water supplies.

This development fits with my belief that there should be widespread, inexpensive blood, biomarker and genetic tests to help catch disease early and to build an understanding of how biomarkers change as disease and aging progress. We could also create adaptive clinical trials to shorten the development and approval process for new medical procedures.


The device is now much smaller than a shoe-box (it is about the size of a USB stick), with the optics and supporting electronics filling the space around the microchip.

Continue reading “Cheap (tens of dollars) genetic lab on a chip systems could help with pandemic control” »

Jan 25, 2008

On the brink of Synthetic Life: DNA synthesis has increased twenty times to full bacteria size

Posted in categories: biological, biotech/medical, defense, existential risks, futurism, lifeboat, military, nanotechnology

Reposted from Next Big Future, formerly advancednano.

A 582,970 base pair sequence of DNA has been synthesized.

It’s the first time a genome the size of a bacterium’s has been chemically synthesized, and it is about 20 times longer than [any DNA molecule] synthesized before.

This is a huge increase in capability. It has broad implications for DNA nanotechnology and synthetic biology.

Continue reading “On the brink of Synthetic Life: DNA synthesis has increased twenty times to full bacteria size” »

Jan 13, 2008

Lifeboat Foundation SAB member asks “Is saving humanity worth the cost?”

Posted in categories: defense, futurism, geopolitics, lifeboat

In his most recent paper, “Reducing the Risk of Human Extinction,” SAB member Jason G. Matheny approached the topic of human extinction from what is, unfortunately, a somewhat unusual angle: he examined the cost-effectiveness of preventing humanity’s extinction due to a catastrophic asteroid impact.

Even with some rather pessimistic assumptions, his calculations showed a pretty convincing return on investment: Matheny predicts that for only about US$2.50 per life-year saved, we could mitigate the risk of humanity being killed off by a large asteroid. Maybe it’s just me, but that sounds pretty compelling.
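
To see where a figure like a couple of dollars per life-year might come from, here is a minimal back-of-the-envelope sketch in Python. Every number in it (program cost, risk reduction, life-years at stake) is a placeholder assumption chosen for illustration rather than a figure from Matheny’s paper; the point is the shape of the arithmetic, and in particular that counting future generations is what pushes the cost per life-year down to the order of dollars.

```python
# Illustrative cost-effectiveness arithmetic for asteroid-risk mitigation.
# All values below are assumed placeholders, NOT Matheny's published figures.

program_cost = 20e9        # assumed cost of a detection/deflection program, in USD
risk_reduction = 1e-6      # assumed probability that the program averts an extinction event

# Life-years at stake under two accounting choices.
current_generation = 6.5e9 * 40   # ~6.5 billion people, ~40 remaining years each (assumed)
with_future_generations = 1e16    # assumed life-years if humanity's potential future is counted

for label, life_years_at_stake in [
    ("current generation only", current_generation),
    ("including future generations", with_future_generations),
]:
    expected_life_years_saved = risk_reduction * life_years_at_stake
    cost_per_life_year = program_cost / expected_life_years_saved
    print(f"{label}: ${cost_per_life_year:,.2f} per life-year saved")
```

Under these assumptions the first scenario comes out near $77,000 per life-year and the second near $2, which is why the accounting of future generations matters so much to the result.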

Matheny also made a very good point that we should all ponder when we consider how our charitable giving and taxes get spent: “We take extraordinary measures to protect some endangered species from extinction. It might be reasonable to take extraordinary measures to protect humanity from the same.”

For more coverage of this important paper, please see the October 2007 issue of Risk Analysis and a recent edition of Nature News.

Jan 8, 2008

First Impressions

Posted in category: futurism

I was engaged in a conversation the other day with someone about my new association with the Lifeboat Foundation and the opportunity presented to me to sit on one of its scientific advisory boards. Let me first point out that the person I was talking with is extremely intelligent but has a layperson’s knowledge of scientific topics and is generally unfamiliar with Singularity-related concepts in particular.

I immediately realized the opportunity in associating with the organization, but still did some reasonable due diligence research before joining it. During the course of the conversation, I explained the goals of the Lifeboat Foundation. I also showed some of the current work that it is doing, and some of the people associated with it by randomly showing some of their biographies. However, when I presented leading biomedical gerontologist Dr. Aubrey de Grey’s biography, I was confronted with what was essentially an ad hominem argument regarding his trademark beard. I refer to this as an ad hominem argument because this person believed, without having previously seen or met Dr. de Grey, that his long beard was the sign of a large ego and that he was doing his cause a disservice by conveying a negative image to the public.

I do not personally know Dr. de Grey, nor do I know the reasons why he chooses to have a long beard. To me, the issue of his beard length has no bearing on the value of his work, and although I do not choose to wear a beard at the present time, I thrive on living in a world of diversity where one can do so. What I have gathered about Dr. de Grey is that he is a highly respected member of this community who has many important things to say. The situation was ironic because Dr. de Grey does research that relates to a medical condition affecting a member of this person’s family.

I know that the point the person I was speaking with made was honestly felt, and that she believed Dr. de Grey could better serve his cause by changing his appearance. But unconscious bias is something that affects all of us to some degree, and it is a subtle but insidious error in reasoning. Fifty years ago, in the United States, with a different person, this discussion might have been about the color of someone’s skin. Twenty-five years ago, it could have been about someone’s sexual orientation. It is easy to see others’ errors in reasoning in retrospect, but it is much harder to find our own biases. I would like to know what errors in thinking and what biases I myself harbor now, ones that will only become evident with a clearer perspective in the future. As such, I will continue to follow the Overcoming Bias web site to help me in my journey.

Continue reading “First Impressions” »

Jan 2, 2008

The Enlightenment Strikes Back

Posted in categories: complex systems, futurism, geopolitics, lifeboat, nanotechnology, open access, sustainability

In a recent conversation on our discussion list, Ben Goertzel, a rising star in artificial intelligence theory, expressed skepticism that we could keep a “modern large-scale capitalist representative democracy cum welfare state cum corporate oligopoly” going for much longer.

Indeed, our complex civilization currently does seem to be under a lot of stress.

Lifeboat Foundation Scientific Advisory Board member and best-selling author David Brin’s reply was quite interesting.

David writes:

Continue reading “The Enlightenment Strikes Back” »

Nov 29, 2007

Planning for First Lifeboat Foundation Conference Underway

Posted in categories: biological, biotech/medical, cybercrime/malcode, defense, existential risks, futurism, geopolitics, lifeboat, nanotechnology, robotics/AI, space

Planning for the first Lifeboat Foundation conference has begun. This FREE conference will be held in Second Life to keep costs down and ensure that you won’t have to worry about missing work or school.

While an exact date has not yet been set, we intend to offer you an exciting lineup of speakers on a day in the late spring or early summer of 2008.

Several members of Lifeboat’s Scientific Advisory Board (SAB) have already expressed interest in presenting. However, potential speakers need not be Lifeboat Foundation members.

If you’re interested in speaking, want to help, or you just want to learn more, please contact me at [email protected].

Nov 19, 2007

Helphookup.com: internet-empowered volunteers against disasters

Posted in categories: defense, existential risks, futurism, lifeboat, open access, open source, sustainability

The inspiration for Help Hookup is actually a comic book, Global Frequency, by Warren Ellis. My brother, Alvin Wang, took the idea to Startup Weekend, and they launched the site for hooking up volunteers this past weekend. It is similar to David Brin’s concept of “empowered citizens” and Glenn Reynolds’s “army of Davids.” These concepts are compatible with the ideas and causes of the Lifeboat Foundation.

Global Frequency featured a network of 1,001 people who handled the jobs that governments did not have the will to handle. I thought it was a great idea, and that it would be even more powerful with 1,000,001 or 100,000,001 people. We would have to leave out the killing that was in the comic.

Typhoons, earthquakes, and improperly funded education could all be handled. If there is a disaster, doctors could volunteer. Airlines could provide tickets. Corporations could provide supplies. Trucking companies could provide transportation. Etc. State a need, meet the need. No overhead. No waste.

The main site is here; it is a way for volunteers to hook up.

The helphookup blog is tracking the progress.

Nov 12, 2007

Social Software Society for Safety

Posted in category: futurism

Social Software Society for Safety.

Is there any scarcity? Perhaps friendship, because it requires time, shared history, and attention, is the ultimate scarcity—but must it always be the case?

A thoroughgoing naturalist, I stipulate that the value of all objects supervenes on their natural properties; rational evaluation of them is constrained by the facts. If I choose one car instead of its identical copy simply because one has been stamped with a “brand,” that is the very definition of irrationality: if the two objects are exactly the same, you must be indifferent between them or violate the axioms of decision theory and identity theory. If I used a Replicator Ray to duplicate the Hope Diamond, which would you choose: the original, on the strength of its history (it was stolen, traveled around the world, etc.), or the duplicate? They are identical!

What happens to the value of the original? Is it worth half as much because now there are two? If I make a third copy, is it worth a third? Nonsense; value has nothing to do with scarcity. A piece of feces may be totally unique in shape, just like a snowflake, but it has no value. The intrinsic value of objects depends on their properties; instrumental value depends on what they can be used for (that is, converted into intrinsic value).

Continue reading “Social Software Society for Safety” »

Oct 29, 2007

One-shot gene therapy protection from radiation

Posted in categories: defense, existential risks, futurism, lifeboat, nuclear weapons

University of Pittsburgh researchers injected a therapy previously found to protect cells from radiation damage into the bone marrow of mice, then dosed them with some 950 roentgens of radiation — nearly twice the amount needed to kill a person in just five hours. Nine in 10 of the therapy-receiving mice survived, compared to 58 percent of the control group.

Between 30 and 330 days, there were no differences in survival rates between the experimental and control groups, indicating that systemic MnSOD-PL treatment was not itself harmful to survival.

The researchers will need to verify whether this treatment would work in humans.

This is part of the early development of genetic modification to increase people’s biological defences (shields) against nuclear, biological and chemical threats. We may not be able to prevent every attack, so we should improve our toughness and survivability. We should still try to stop attacks and to create conditions that lead to fewer of them.

Aug 21, 2007

Risks Not Worth Worrying About

Posted in categories: defense, futurism, lifeboat

There are dozens of published existential risks; there are undoubtedly many more that Nick Bostrom did not think of in his paper on the subject. Ideally, the Lifeboat Foundation and other organizations would identify each of these risks and take action to combat them all, but this simply isn’t realistic. We have a finite budget and a finite number of man-hours to spend on the problem, and our resources aren’t even particularly large compared with those of other non-profit organizations. If Lifeboat or other organizations are going to take serious action against existential risk, we need to identify the areas where we can do the most good, even at the expense of ignoring other risks. Humans like to eliminate risks totally, but this is a cognitive bias; it does not correspond to the most effective strategy. In general, when assessing existential risks, a number of heuristics are useful:

- Any risk which has become widely known, or an issue in contemporary politics, will probably be very hard to deal with. Thus, even if it is a legitimate risk, it may be worth putting on the back burner; there’s no point in spending millions of dollars for little gain.

- Any risk which is totally natural (one that could happen without human intervention) must be highly improbable, as we know we have been on this planet for a hundred thousand years without being killed off. To estimate the probability of such risks, use Laplace’s Law of Succession (see the sketch after this list).

- Risks which we cannot affect the probability of can be safely ignored. It does us little good to know that there is a 1% chance of doom next Thursday, if we can’t do anything about it.
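
As a concrete illustration of the natural-risk heuristic above, here is a minimal sketch of Laplace’s Law of Succession: after n periods with zero occurrences of an event, the estimated probability of an occurrence in the next period is (0 + 1) / (n + 2). The hundred-thousand-year survival figure is taken from the second bullet; everything else is just the formula.

```python
def laplace_succession(occurrences: int, trials: int) -> float:
    """Laplace's Law of Succession: estimated probability that the event
    occurs in the next trial, given `occurrences` events in `trials` trials."""
    return (occurrences + 1) / (trials + 2)

# Humanity has survived roughly 100,000 years with zero natural extinction
# events, so the estimated per-year probability is tiny.
years_survived = 100_000
p_next_year = laplace_succession(occurrences=0, trials=years_survived)
print(f"Estimated probability of a natural extinction event next year: {p_next_year:.2e}")
# prints roughly 1.00e-05, i.e. about 1 in 100,002
```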

Continue reading “Risks Not Worth Worrying About” »