Archive for the ‘futurism’ category: Page 1211

Feb 16, 2008

Safeguarding Humanity

Posted by in categories: existential risks, futurism

I was born into a world in which no individual or group claimed to own the mission embodied in the Lifeboat Foundation’s two-word motto. Government agencies, charitable organizations, universities, hospitals, religious institutions — all might have laid claim to some piece of the puzzle. But safeguarding humanity? That was out of everyone’s scope. It would have been a plausible motto only for comic-book organizations such as the Justice League or the Guardians of the Universe.

Take the United Nations, conceived in the midst of the Second World War and brought into its own after the war’s conclusion. The UN Charter states that the United Nations exists:

  • to save succeeding generations from the scourge of war, which twice in our lifetime has brought untold sorrow to mankind, and
  • to reaffirm faith in fundamental human rights, in the dignity and worth of the human person, in the equal rights of men and women and of nations large and small, and
  • to establish conditions under which justice and respect for the obligations arising from treaties and other sources of international law can be maintained, and
  • to promote social progress and better standards of life in larger freedom

All of these are noble, and incredibly important, aims. But even the United Nations manages to name only one existential risk, warfare, which it is pledged to help prevent. Anyone reading this can probably cite a half dozen more.

It is both exciting and daunting to live in an age in which a group like the Lifeboat Foundation can exist outside of the realm of fantasy. It’s exciting because our awareness of possibility is so much greater than it was even a generation or two ago. And it is daunting for exactly the same reason. We can envision plausible triumphs for humanity that really do transcend our wildest dreams, or at least our most glorious fantasies as articulated a few decades ago. Likewise, that worst of all possible outcomes — the sudden and utter disappearance of our civilization, or of our species, or of life itself — now presents itself as the end result of not just one possible calamity, but of many.

Continue reading “Safeguarding Humanity” »

Feb 3, 2008

Spending Effectively

Posted by in categories: finance, futurism, lifeboat

Last year, the Singularity Institute raised over $500,000. The World Transhumanist Association raised $50,000. The Lifeboat Foundation set a new record for the single largest donation. The Center for Responsible Nanotechnology’s finances are combined with those of World Care, a related organization, so the public can’t get precise figures. But overall, it’s safe to say, we’ve been doing fairly well. Most not-for-profit organizations aren’t funded adequately; it’s rare for charities, even internationally famous ones, to have a large full-time staff, a physical headquarters, etc.

The important question is, now that we’ve accumulated all of this money, what are we going to spend it on? It’s possible, theoretically, to put it all into Treasury bonds and forget about it for thirty years, but that would be an enormous waste of expected utility. In technology development, the earlier the money is spent (in general), the larger the effect will be. Spending $1M on a technology in the formative stages has a huge impact, probably doubling the overall budget or more. Spending $1M on a technology in the mature stages won’t even be noticed. We have plenty of case studies: Radios. TVs. Computers. Internet. Telephones. Cars. Startups.
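The timing argument can be made concrete with a quick calculation. The budget figures below are hypothetical, chosen only to illustrate the scale difference between formative-stage and mature-stage funding:

```python
# Illustrates the early-vs-late funding argument with hypothetical budgets.

def relative_impact(donation, existing_budget):
    """Fraction by which a donation grows a field's total funding."""
    return donation / existing_budget

# $1M added to a formative-stage effort with a $1M budget doubles it...
early = relative_impact(1_000_000, 1_000_000)        # 1.0, i.e. +100%

# ...while $1M added to a mature, $1B field is barely noticeable.
late = relative_impact(1_000_000, 1_000_000_000)     # 0.001, i.e. +0.1%

print(f"formative stage: +{early:.0%}; mature stage: +{late:.1%}")
```

The same dollar buys a thousand times more relative influence at the formative stage, which is the whole case for spending early rather than parking the money in bonds.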

The opposite danger is overfunding the project, commonly called “throwing money at the problem”. Hiring a lot of new people without thinking about how they will help is one common symptom. Having bloated layers of middle management is another. To an outside observer, it probably seems like we’re reaching this stage already. Hiring a Vice President In Charge Of Being In Charge doesn’t just waste money; it causes the entire organization to lose focus and distracts everyone from the ultimate goal.

I would suggest a top-down approach: start with the goal, figure out what you need, and get it. The opposite approach is to look for things that might be useful, get them, then see how you can complete a project with the stuff you’ve acquired. NASA is an interesting case study, as they followed the first strategy for a number of years, then switched to the second one.

Continue reading “Spending Effectively” »

Jan 29, 2008

Cheap (tens of dollars) genetic lab on a chip systems could help with pandemic control

Posted by in categories: biological, defense, existential risks, futurism, lifeboat

Cross-posted from Next Big Future.

Since the journal article was submitted to the Royal Society of Chemistry, the U of Alberta researchers have already made the processor and unit smaller and have brought the cost of building a portable unit for genetic testing down to about $100 Cdn. These systems are also portable and even faster (results take only minutes). Backhouse, Elliott and McMullin are now demonstrating prototypes of a USB-key-like system that may ultimately be as inexpensive as the standard USB memory keys in common use — only tens of dollars. It could help with pandemic control and with detecting and controlling tainted water supplies.

This development fits in with my belief that there should be widespread inexpensive blood, biomarker and genetic tests to help catch disease early and to develop an understanding of biomarker changes that track the progression of disease and aging. We could also create adaptive clinical trials to shorten the development and approval process for new medical procedures.


The device is now much smaller than a shoe-box (USB-stick size), with the optics and supporting electronics filling the space around the microchip.

Continue reading “Cheap (tens of dollars) genetic lab on a chip systems could help with pandemic control” »

Jan 25, 2008

On the brink of Synthetic Life: DNA synthesis has increased twenty times to full bacteria size

Posted by in categories: biological, biotech/medical, defense, existential risks, futurism, lifeboat, military, nanotechnology

Reposted from Next Big Future, which was formerly advancednano.

A 582,970 base pair sequence of DNA has been synthesized.

It’s the first time a genome the size of a bacterium’s has been chemically synthesized; it is about 20 times longer than [any DNA molecule] synthesized before.

This is a huge increase in capability. It has broad implications for DNA nanotechnology and synthetic biology.

Continue reading “On the brink of Synthetic Life: DNA synthesis has increased twenty times to full bacteria size” »

Jan 13, 2008

Lifeboat Foundation SAB member asks “Is saving humanity worth the cost?”

Posted by in categories: defense, futurism, geopolitics, lifeboat

In his most recent paper “Reducing the Risk of Human Extinction,” SAB member Jason G. Matheny approached the topic of human extinction from what is unfortunately a somewhat unusual angle. Jason examined the cost effectiveness of preventing humanity’s extinction due to a catastrophic asteroid impact.

Even with some rather pessimistic assumptions, his calculations showed a pretty convincing return on investment. Matheny estimates that we could mitigate the risk of humanity being killed off by a large asteroid for only about US$2.50 per life-year saved. Maybe it’s just me, but that sounds pretty compelling.
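To show the shape of such an estimate (every input below is an illustrative assumption of my own, not a figure from Matheny’s paper):

```python
# Back-of-the-envelope cost-effectiveness of an extinction-mitigation program.
# All numbers are hypothetical, for illustration only.

def cost_per_life_year(program_cost, extinction_prob_averted,
                       population, avg_life_years_remaining):
    """Expected program cost per life-year saved."""
    expected_life_years_saved = (extinction_prob_averted
                                 * population
                                 * avg_life_years_remaining)
    return program_cost / expected_life_years_saved

# A $20B program averting a 1% extinction risk for 6 billion people
# with an average of 40 life-years remaining each:
print(cost_per_life_year(20e9, 0.01, 6e9, 40))   # ≈ 8.33 dollars per life-year
```

Counting the life-years of future generations, as analyses of extinction risk typically do, makes the denominator vastly larger, which is how figures as low as a few dollars per life-year can arise.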

Matheny also made a very good point that we all should ponder when we consider how our charitable giving and taxes get spent. “We take extraordinary measures to protect some endangered species from extinction. It might be reasonable to take extraordinary measures to protect humanity from the same.”

For more coverage on this important paper please see the October 2007 issue of Risk Analysis and a recent edition of Nature News.

Jan 8, 2008

First Impressions

Posted by in category: futurism

I was engaged in a conversation the other day with someone about my new association with the Lifeboat Foundation and the opportunity that was presented to me to sit on one of the scientific advisory boards. Let me first point out that the person I was talking with is extremely intelligent, but has a lay person’s knowledge of scientific topics, and is generally unfamiliar with Singularity related concepts in particular.

I immediately realized the opportunity in associating with the organization, but still did some reasonable due-diligence research before joining it. During the course of the conversation, I explained the goals of the Lifeboat Foundation. I also showed some of the current work it is doing and introduced some of the people associated with it by pulling up a random sample of their biographies. However, when I presented leading biomedical gerontologist Dr. Aubrey de Grey’s biography, I was confronted with what was essentially an ad hominem argument regarding his trademark beard. I refer to this as an ad hominem argument because this person believed, without having previously seen or met Dr. de Grey, that his long beard was the sign of a large ego and that he was doing his cause a disservice by conveying a negative image to the public.

I do not personally know Dr. de Grey, nor do I know the reasons why he chooses to have a long beard. To me, the issue of his beard length has no bearing on the value of his work, and although I do not choose to wear a beard at the present time, I thrive on living in a world of diversity where one can do so. What I have gathered about Dr. de Grey is that he is a highly respected member of this community who has many important things to say. The situation was ironic because Dr. de Grey does research that relates to a medical condition affecting a member of this person’s family.

I know that the point the person I was speaking with made was honestly felt, and that she believed Dr. de Grey could better serve his cause by changing his appearance. But unconscious bias is something that affects all of us to some degree, and it is a subtle but insidious error in reasoning. Fifty years ago, in the United States, with a different person, this discussion might have been about the color of someone’s skin. Twenty-five years ago, it could have been about someone’s sexual orientation. It’s easy to see the errors in others’ reasoning in retrospect, but it’s much harder to find our own biases. I long to know what errors in thinking style and biases I myself harbor now, and which will only be evident with a clearer perspective in the future. As such, I will continue to follow the Overcoming Bias web site to help me in my journey.

Continue reading “First Impressions” »

Jan 2, 2008

The Enlightenment Strikes Back

Posted by in categories: complex systems, futurism, geopolitics, lifeboat, nanotechnology, open access, sustainability

In a recent conversation on our discussion list, Ben Goertzel, a rising star in artificial intelligence theory, expressed skepticism that we could keep a “modern large-scale capitalist representative democracy cum welfare state cum corporate oligopoly” going for much longer.

Indeed, our complex civilization currently does seem to be under a lot of stress.

Lifeboat Foundation Scientific Advisory Board member and best-selling author David Brin’s reply was quite interesting.

David writes:

Continue reading “The Enlightenment Strikes Back” »

Nov 29, 2007

Planning for First Lifeboat Foundation Conference Underway

Posted by in categories: biological, biotech/medical, cybercrime/malcode, defense, existential risks, futurism, geopolitics, lifeboat, nanotechnology, robotics/AI, space

Planning for the first Lifeboat Foundation conference has begun. This FREE conference will be held in Second Life to keep costs down and ensure that you won’t have to worry about missing work or school.

While an exact date has not yet been set, we intend to offer you an exciting lineup of speakers on a day in the late spring or early summer of 2008.

Several members of Lifeboat’s Scientific Advisory Board (SAB) have already expressed interest in presenting. However, potential speakers need not be Lifeboat Foundation members.

If you’re interested in speaking, want to help, or you just want to learn more, please contact me at [email protected].

Nov 19, 2007

Helphookup.com: Internet-empowered volunteers against disasters

Posted by in categories: defense, existential risks, futurism, lifeboat, open access, open source, sustainability

The inspiration for Help Hookup is actually a comic book, Global Frequency by Warren Ellis. My brother, Alvin Wang, took the idea to startup weekend, and they launched it this past weekend as a way of hooking up volunteers. It is similar to David Brin’s concept of “empowered citizens” and Glenn Reynolds’s “An Army of Davids.” The concepts are compatible with the ideas and causes of the Lifeboat Foundation.

Global Frequency was a network of 1,001 people that handled the jobs that the governments did not have the will to handle. I thought that it was a great idea and it would be more powerful with 1,000,001 people or 100,000,001 people. We would have to leave out the killing that was in the comic.

Typhoons, earthquakes, and improperly funded education could all be handled. If there is a disaster, doctors could volunteer. Airlines could provide tickets. Corporations could provide supplies. Trucking companies could provide transportation. Etc. State a need, meet the need. No overhead. No waste.

The main site is here; it is a way for volunteers to hook up.

The helphookup blog is tracking the progress.

Nov 12, 2007

Social Software Society for Safety

Posted by in category: futurism

Is there any scarcity? Perhaps friendship, because it requires time, shared history, and attention, is the ultimate scarcity — but must that always be the case?

A thoroughgoing naturalist, I stipulate that the value of all objects supervenes on their natural properties; rational evaluation of them is constrained by the facts. If I choose one car instead of its identical copy simply because one has been stamped with a “brand,” that is the very definition of irrationality: if the two objects are exactly the same, you must be indifferent or violate the axioms of decision theory and identity theory. Suppose I used a Replicator Ray to duplicate the Hope Diamond. Which would you choose: the original, based on its history (it was stolen, traveled around the world, etc.), or the duplicate? They are identical!

What happens to the value of the original? Is it worth ½ because now there are 2? If I make a 3rd copy, is it worth 1/3? Nonsense — value has nothing to do with scarcity. A piece of feces may be totally unique in shape, just like a snowflake, but it has no value. The intrinsic value of objects depends on their properties; instrumental value depends on what they can be used for (converted to intrinsic value).
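The supervenience claim above can be stated compactly. In my own notation (where N is the set of natural properties and v is the value function), a minimal formalization:

```latex
\forall x, y:\quad
\Big( \forall P \in \mathcal{N}:\; P(x) \leftrightarrow P(y) \Big)
\;\Rightarrow\; v(x) = v(y)
```

On this reading, preferring the original Hope Diamond over an atom-for-atom duplicate requires $v(\text{original}) \neq v(\text{copy})$ despite identical natural properties, contradicting the premise.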

Continue reading “Social Software Society for Safety” »