Freefall 3151 July 20, 2018


The two-time pad: midwife of information theory?


The NSA has declassified a fascinating account by John Tiltman, one of Britain’s top cryptanalysts during World War 2, of the work he did against Russian ciphers in the 1920s and 30s.

In it, he reveals (first para, page 8) that from the time the Russians first introduced one-time pads in 1928, they actually allowed these pads to be used twice.

This was still a vast improvement on the weak ciphers and code books the Russians had used previously. Tiltman notes ruefully that “We were hardly able to read anything at all except in the case of one or two very stereotyped proforma messages”.

Now, after Gilbert Vernam developed encryption using XOR with a key tape, Joseph Mauborgne suggested using the key tape one time only for security, and this may have seemed natural in the context of a cable company. When the Russians developed their manual system (which may have been inspired by the U.S. work or by a German one-time pad developed earlier in the 1920s), they presumably reckoned that using pads twice was safe enough.
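
For readers who haven’t met it, Vernam’s construction just XORs each character of the message with a character from the key tape. A minimal Python sketch (purely illustrative, not anyone’s historical implementation):

    import secrets

    def vernam(data: bytes, key: bytes) -> bytes:
        # XOR each byte with the corresponding key byte; the same
        # operation both encrypts and decrypts.
        return bytes(d ^ k for d, k in zip(data, key))

    plaintext = b"ATTACK AT DAWN"
    key = secrets.token_bytes(len(plaintext))  # one random key byte per message byte

    ciphertext = vernam(plaintext, key)
    assert vernam(ciphertext, key) == plaintext  # XOR is its own inverse

Because XOR is its own inverse, all of the security rests on the key stream being truly random and never reused.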

They were spectacularly wrong. The USA started Operation Venona in 1943 to decrypt messages where one-time pads had been reused, and this later became one of the first applications of computers to cryptanalysis, leading to the exposure of spies such as Blunt and Cairncross. The late Bob Morris, chief scientist at the NSA, used to warn us enigmatically of “The Two-time pad”. The story up till now was that the Russians must have reused pads under pressure of war, when it became difficult to get couriers through to embassies. Now it seems to have been Russian policy all along.

Many people have wondered what classified war work might have inspired Claude Shannon to write his stunning papers at the end of WW2 in which he established the mathematical basis of cryptography, and of information more generally.

Good research usually comes from real problems. And here was a real problem, which demanded careful clarification of two questions. Exactly why was the one-time pad good and the two-time pad bad? And how can you measure the actual amount of information in an English (or Russian) plaintext telegram: is it more or less than half the amount of information you might squeeze into that many bits? These questions are very much sharper for the two-time pad than for rotor machines or the older field ciphers.
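
To make the failure concrete, here is a minimal sketch (the messages are invented for illustration). XORing two ciphertexts made with the same pad cancels the key completely, leaving the XOR of the two plaintexts, and the redundancy of natural language lets an analyst pick that apart:

    import secrets

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    p1 = b"MEET THE AGENT AT NOON"
    p2 = b"SEND MORE FUNDS AT ONCE"[: len(p1)]
    key = secrets.token_bytes(len(p1))  # the pad, wrongly used for both messages

    c1, c2 = xor(p1, key), xor(p2, key)

    # (p1 ^ k) ^ (p2 ^ k) == p1 ^ p2: the key has vanished.
    assert xor(c1, c2) == xor(p1, p2)

    # An analyst can now "crib-drag": slide a guessed word along xor(c1, c2);
    # wherever the guess matches one plaintext, legible fragments of the
    # other plaintext appear.

The sharpness of that failure, total security with one use and catastrophic leakage with two, is exactly what makes the information-theoretic question so crisp.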

That at least was what suddenly struck me on reading Tiltman. Of course this is supposition; but perhaps there are interesting documents about Shannon’s war work to be flushed out with freedom of information requests. (Hat tip: thanks to Dave Banisar for pointing us at the Tiltman paper.)


Saturday Morning Breakfast Cereal - Consensus




Hovertext:
This will be the best part about The Internet of Things.



Fundamental Value Differences Are Not That Fundamental


I.

Ozy (and others) talk about fundamental value differences as a barrier to cooperation.

On their model (as I understand it) there are at least two kinds of disagreement. In the first, people share values but disagree about facts. For example, you and I may both want to help the Third World. But you believe foreign aid helps the Third World, and I believe it props up corrupt governments and discourages economic self-sufficiency. We should remain allies while investigating the true effect of foreign aid, after which our disagreement will disappear.

In the second, you and I have fundamentally different values. Perhaps you want to help the Third World, but I believe that a country should only look after its own citizens. In this case there’s nothing to be done. You consider me a heartless monster who wants foreigners to starve, and I consider you a heartless monster who wants to steal from my neighbors to support random people halfway across the world. While we can agree not to have a civil war for pragmatic reasons, we shouldn’t mince words and pretend not to be enemies. Ozy writes (liberally edited, read the original):

From a conservative perspective, I am an incomprehensible moral mutant…I imagine someone actively rejoicing in denying a person a fair trial because they deserve to be in prison– not just accepting this as a grim reality, but thinking it is good and right and virtuous– and I shudder. They must feel similarly about me.

However, from my perspective, conservatives are perfectly willing to sacrifice things that actually matter in the world– justice, equality, happiness, an end to suffering– in order to suck up to unjust authority or help the wealthy and undeserving or keep people from having sex lives they think are gross.

There is, I feel, opportunity for compromise. An outright war would be unpleasant for everyone…And yet, fundamentally… it’s not true that conservatives as a group are working for the same goals as I am but simply have different ideas of how to pursue it…my read of the psychological evidence is that, from my value system, about half the country is evil and it is in my self-interest to shame the expression of their values, indoctrinate their children, and work for a future where their values are no longer represented on this Earth. So it goes.

And from the subreddit comment by GCUPokeItWithAStick:

I do think that at a minimum, if you believe that one person’s interests are intrinsically more important than another’s (or as the more sophisticated versions play out, that ethics is agent-relative), then something has gone fundamentally wrong, and this, I think, is the core of the distinction between left and right. Being a rightist in this sense is totally indefensible, and a sign that yes, you should give up on attempting to ascertain any sort of moral truth, because you can’t do it.

I will give this position its due: I agree with the fact/value distinction. I agree it’s conceptually very clear what we’re doing when we try to convince someone with our same values of a factual truth, and that it’s confusing and maybe impossible to change someone’s values.

But I think the arguments above are overly simplistic. I think rationalists might be especially susceptible to this kind of thing, because we often use economic models where an agent (or AI) has a given value function (eg “produce paperclips”) which generates its actions. This kind of agent really does lack common ground with another agent whose goal function is different. But humans rarely work like this. And even when they do, it’s rarely in the ways we think. We are far too quick to imagine binary value differences that line up exactly between Us and Them, and far too slow to recognize the complicated and many-scaled pattern of value differences all around us.

Eliezer Yudkowsky writes, in Are Your Enemies Innately Evil?:

On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America. Now why do you suppose they might have done that? Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?

Realistically, most people don’t construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy’s story, as seen by the Enemy, is not going to make the Enemy look bad. If you try to construe motivations that would make the Enemy look bad, you’ll end up flat wrong about what actually goes on in the Enemy’s mind.

So what was going through the 9/11 hijackers’ minds? How many value differences did they have from us?

It seems totally possible that the hijackers had no value differences from me at all. If I believed in the literal truth of Wahhabi Islam – a factual belief – I might be pretty worried about the sinful atheist West. If I believed that the West’s sinful ways were destroying my religion, and that my religion encoded a uniquely socially beneficial way of life – both factual beliefs – I might want to stop it. And if I believed that a sufficiently spectacular terrorist attack would cause people all around the world to rise up and throw off the shackles of Western oppression – another factual belief – I might be prepared to sacrifice myself for the greater good. If I thought complicated Platonic contracts of cooperation and nonviolence didn’t work – sort of a factual belief – then my morals would no longer restrain me.

But of course maybe the hijackers had a bunch of value differences. Maybe they believed that American lives are worth nothing. Maybe they believed that striking a blow for their homeland is a terminal good, whether or not their homeland is any good or its religion is true. Maybe they believed any act you do in the name of God is automatically okay.

I have no idea how many of these are true. But I would hate to jump to conclusions, and infer from the fact that they crashed two planes that they believe crashing planes is a terminal good. Or infer from someone opposing abortion that they just think oppressing women is a terminal value. Or infer from people committing murder that they believe in murderism, the philosophy that says that murder is good. I think most people err on the side of being too quick to dismiss others as fundamentally different, and that a little charity in assessing their motives can go a long way.

II.

But that’s too easy. What about people who didn’t die in self-inflicted plane crashes, and who can just tell us their values? Consider the original example – foreign aid. I’ve heard many isolationists say in no uncertain terms that they believe we should not send money to foreign countries, and that this is a basic principle and not just a consequence of some factual belief like that foreign countries would waste it. Meanwhile, I know other people who argue that we should treat foreigners exactly the same as our fellow citizens – indeed, that it would be an affront to basic compassion and to the unity of the human race not to do so. Surely this is a strong case for actual value differences?

My only counter to this line of argument is that almost nobody, me included, ever takes it seriously or to its logical conclusion. I have never heard any cosmopolitans seriously endorse the idea that the Medicaid budget should be mostly redirected from the American poor (who are already plenty healthy by world standards) and used to fund clinics in Africa, where a dollar goes much further. Perhaps this is just political expediency, and some would talk more about such a plan if they thought it could pass. But in that case, they should realize that they are very few in number, and that their value difference isn’t just with conservatives but with the overwhelming majority of their friends and their own side.

And if nativist conservatives are laughing right now, I know that some of them have given money to foreign countries affected by natural disasters. Some have even suggested the government do so – when the US government sent resources to Japan to help rescue survivors of the devastating Fukushima tsunami, I didn’t hear anyone talk about how those dollars could better be used at home.

Very few people have consistent values on questions like these. That’s because nobody naturally has principles. People take the unprincipled mishmash of their real opinions, extract principles out of it, and follow those principles. But the average person only does this very weakly, to the point of having principles like “it’s bad when you lie to me, so maybe lying is wrong in general” – and even moral philosophers do it less than a hundred percent and apply their principles inconsistently.

(this isn’t to say those who have consistent principles are necessarily any better grounded. I’ve talked a lot about shifting views of federalism: when the national government was against gay marriage, conservatives supported top-down decision-making at the federal level, and liberals protested for states’ rights. Then when the national government came out in support, conservatives switched to wanting states’ rights and liberals switched to wanting top-down federal decisions. We can imagine some principled liberal who, in 1995, said “It seems to me right now that state rights are good, so I will support them forevermore, even when it hurts my side”. But her belief still would have ended up basically determined by random happenstance; in a world where the government started out supporting gay marriage but switched to oppose it, she would have ended up with – and stuck to – the opposite principle)

But I’m saying that what principle you verbalize (“I believe we must treat foreigners exactly as our own citizens!”) isn’t actually that interesting. In reality, there’s a wide spectrum of what people will do with foreigners. If we imagine it as a bell curve, the far right end has a tiny number of hyper-consistent people who oppose any government money going abroad unless it directly helps domestic citizens. A little further towards the center we get the people who say they believe this, but will support heroic efforts to rescue Japanese civilians from a tsunami. The bulge in the middle is people who want something like the current level of foreign aid, as long as it goes to sufficiently photogenic children. Further to the left, we get the people I’m having this discussion with, who usually support something like a bit more aid and open borders. And on the far left, we get another handful of hyper-consistent people, who think the US government should redirect the Medicaid budget to Africa.

If you’re at Point N in some bell curve, how far do you have to go before you come to someone with “fundamental value differences” from you? How far do you have to go before someone is inherently your enemy, cannot be debated with, and must be crushed in some kind of fight? If the answer is “any difference at all”, I regret to inform you that the bell curve is continuous; there may not be anyone with exactly the same position as you.

And that’s just the one issue of foreign aid. Imagine a hundred or a thousand such issues, all equally fraught. God help GCU, who goes further and says you’re “indefensible” if you believe any human’s interests are more important than any other’s. Does he (I’ll assume it’s a he) do more to help his wife when she’s sick than he would to help a random stranger? This isn’t meant to be a gotcha, it’s meant to be an example of how we formulate our morality. Person A cares more about his wife than a random person, and also donates some token amount to help the poor in Africa. He dismisses caring about his wife as noise, then extrapolates from the Africa donation to say “we must help all people equally”. Person B also cares more about his wife than a random person, and also donates some token amount to Africa. He dismisses the Africa donation as noise, then extrapolates from his wife to “we must care most about those closest to us”. I’m not saying that how each person frames his moral principle won’t have effects later down the line, but those effects will be the tail wagging the dog. If A and B look at each other and say “I am an everyone-equally-er, you are a people-close-to-you-first-er, we can never truly understand one another, we must be sworn enemies”, they’re putting a whole lot more emphasis on which string of syllables they use to describe their mental processes than really seems warranted.

Why am I making such a big deal of this? Isn’t a gradual continuous value difference still a value difference?

Yes. But I expect that (contra the Moral Foundations idea) both the supposed-nativist and the supposed-cosmopolitan have at least a tiny bit of the instinct toward nativism and the instinct toward cosmopolitanism. They may be suppressing one or the other in order to fit their principles. The nativist might be afraid that if he admitted any instinct toward cosmopolitanism, people could force him to stop volunteering at his community center, because his neighbor’s children are less important than starving Ethiopians and he should be helping them somehow instead. The cosmopolitan might be afraid that if he admitted any instinct toward preferring people close to him, it would justify a jingoistic I’ve-got-mine attitude that thinks of foreigners as subhuman.

But the idea that they’re inherently different, and neither can understand the other’s appeals or debate each other, is balderdash. A lot of the our-values-are-just-inherently-different talk I’ve heard centers around immigration. Surely liberals must have some sort of strong commitment to the inherent moral value of foreigners if they’re so interested in letting them into the country? Surely conservatives must have some sort of innate natives-first mentality to think they can just lock people out? But…

Okay. I admit that the standard survey question – whether immigrants strengthen the country through their hard work and talents – is a factual question. But we both know that you would get basically the same results if you asked "IMMIGRATION GOOD OR BAD?" or "DO IMMIGRANTS HAVE THE SAME RIGHTS TO BE IN THIS COUNTRY AS THE NATIVE BORN?" or whatever. And what we see is that this is totally contingent and dependent on the politics of the moment. Of all those liberals talking about how they can’t possibly comprehend conservatives because being against immigration would just require completely alien values, half of them were anti-immigrant ten years ago. Of all those conservatives talking about how liberals can never be convinced by mere debate because debate can’t cut across fundamental differences, they should try to figure out why their own party was half again as immigrant-friendly in 2002 as in 2010.

I don’t think anyone switched because of anything they learned in a philosophy class. They switched because it became mildly convenient to switch, and they had a bunch of pro-immigrant instincts and anti-immigrant instincts the whole time, so it was easy to switch which words came out of their mouths as soon as it became convenient to do so.

So if the 9/11 hijackers told me they truly placed zero value on American lives, I would at least reserve the possibility that sure, this is something you say when you want to impress your terrorist friends, but that in a crunch – if they saw an anvil about to drop on an American kid and had only a second to push him out of the way – they would end up having some of the same instincts as the rest of us.

III.

Is there anyone at all whom I am willing to admit definitely, 100%, in the most real possible way, has different values than I do?

I think so. I remember a debate I had with my ex-girlfriend. Both of us are atheist materialist-computationalist utilitarian rationalist effective altruist liberal-tarians with 99% similar views on every political and social question. On the other hand, it seemed axiomatic to me that it wasn’t morally good/obligatory to create extra happy people (eg have a duty to increase the population from 10,000 to 100,000 people in a way that might eventually create the Repugnant Conclusion), and it seemed equally axiomatic to her that it was morally good/obligatory to do that. We debated this maybe a dozen times throughout our relationship, and although we probably came to understand each other’s position a little more, and came to agree it was a hard problem with some intuitions on both sides, we didn’t come an inch closer to agreement.

I’ve had a few other conversations that ended with me feeling the same way. I may not be the typical Sierra Club member, but I consider myself an environmentalist in the sense of liking the environment and wanting it to be preserved. But I don’t think I value biodiversity for its own sake – if you offered me something useful in exchange for half of all species going extinct – promising that they would all be random snails, or sponges, or some squirrel species that looked exactly like other squirrel species, or otherwise not anything we cared about – I’d take it. If you offered me all charismatic megafauna being relegated to zoos in exchange for lots of well-preserved beautiful forests that people could enjoy whenever they wanted, I would take that one too. I know other people who consider themselves environmentalists who are horrified by this. Some of them agree with me on every single political issue that real people actually debate.

I think these kinds of things are probably real fundamental value differences. But if I’m not sure I have any fundamental value differences with the 9/11 hijackers, and I am sure I have one with one of the people I’m closest to in the entire world, how big a deal is it, exactly? The world isn’t made of Our Tribe with our fundamental values and That Tribe There with their fundamental values. It’s made of a giant mishmash of provisional things that solidify into values at some point but can be unsolidified by random chance or temporary advantage, and everyone probably has a couple of unexplored value differences and unexpected value similarities with everyone else.

This means that trying to use shaming and indoctrination to settle value differences is going to be harder than you think. Successfully defeat the people on the other side of the One Great Binary Value Divide That Separates Us Into Two Clear Groups, and you’re going to notice you still have some value differences with your allies (if you don’t now, you will in ten years, when the political calculus changes slightly and their deepest ethical beliefs become totally different). Beat your allies, and you and the subset of remaining allies will still have value differences. It’s value differences all the way down. You will have an infinite number of fights, and you’re sure to lose some of them. Have you considered getting principles and using asymmetric weapons?

I’m not saying you don’t have to fight for your values. The foreign aid budget still has to be some specific number, and if your explicitly-endorsed principles disagree with someone else’s explicitly-endorsed principles, then you’ve got to fight them to determine what it is.

But “remember, liberals and conservatives have fundamental value differences, so they are two tribes that can’t coexist” is the wrong message. “Remember, everyone has weak and malleable value differences with everyone else, and maybe a few more fundamental ones though it’s hard to tell, and neither type necessarily lines up with tribes at all, so they had damn better learn to coexist,” is more like it.


Canned Monkeys Don't Ship Well, the Remix Version


You thought you were rid of me. Sorry, Charlie's still on vacation for a few more days, so here's something that has nothing to do with current politics. Just to be annoying, I'm going to revisit that ever-giving fount of joy, slower-than-light (STL) interstellar travel. You may think that, because it's not physically impossible, it's inevitable that humans will travel this way one day. Sadly, it looks like blasting your way between the stars the hard way requires magical technology too, just as FTL does.

We've talked about this before on the blog, but unfortunately, the really good conversation was about 800 comments in and about 8 (?) years ago, so you can't just google it. Here, I'm going to cover two points: why canned monkeys don't ship well, and what the precursors to STL would look like, so that we'll know if our society ever starts preparing itself to expand into space at less than the speed of light.

"Canned monkeys don't ship well" refers to the problem of keeping people alive in interplanetary or interstellar space (this for the two people who didn't know it already). There are a lot of problems, what with providing air, water, food, radiation protection, decent meteor defenses, a working clothes cleaner, producing food reliably, recycling trash, keeping people healthy and able to step onto a planet again, and last but not least, completing a human life cycle from conception through birth to maturity and senescence. Many of these are provided by Earth, and the rest require more space than anyone currently has on the International Space Station (ISS). That's why, for instance, they don't have a clothes washer on the ISS. They wear their clothes for a week or so depending on what it is, and throw them out. More insidious problems have to do with what the lack of gravity does to the health of humans, plants, and animals. Correct me if I'm wrong, but I don't think any plant or animal has successfully completed a life cycle (seed to seed or animal to animal) entirely in freefall. And, if you read Chris Hadfield's An Astronauts Guide to Life on Earth, he's quite candid about how extendedly unpleasant it was to reacclimatize to Earth after spending a year in space. It wasn't just reflexes--his feet couldn't tolerate the weight on them, he had rashes and all sorts of weird symptoms that took days to go away, and weakened bones that took at least a year to go away. It's uncomfortable to get into freefall, it's painful to get out of freefall after an extended time in it, it's not just humans that have problems, it seems to be most eukaryotes do, and we still need to figure out how to work around this. Magic, obviously. Just wave that wand, and the problems go away. But what exactly is the wand you're waving? CRISPR? Vibrating pants? Some wonderful pharmaceutical suitable for plants, humans, and fish (gravipramine?)?

Fine, you say, my interstellar ark will spin to make up for this problem. And the ship will be huge, so you can have not just your damned washing machine but vast pools of water as radiation shields (as in Anathem). This is great. Heck, we'll even assume that you have steering and propulsion systems that can handle pushing a great sloshing gyroscope in a precise direction for centuries. Yeah, that. It'll be fun to steer your spindizzy when there's a lot of weight moving around inside it. The wobbling thrust to compensate will be fun too, and keeping this coupled set of chaotic oscillators from going out of control will be easy, of course. All we need is a magic navigation system, magic because it doesn't just steer a wobbling gyroscope impeccably, it does it for centuries without error, and with rapid collision avoidance too. Isn't magitech wonderful?
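
(A back-of-envelope aside on why the ark ends up so huge, and hence such a miserable gyroscope: spin gravity scales as a = omega^2 * r, and people are commonly thought to tolerate only a couple of rotations per minute before motion sickness sets in (that tolerance figure is a rule of thumb, not settled fact). A quick illustrative sketch in Python:

    import math

    g = 9.81  # m/s^2, target spin gravity at the rim

    for radius_m in (10, 100, 1000):
        omega = math.sqrt(g / radius_m)   # a = omega^2 * r, so omega = sqrt(g / r)
        rpm = omega * 60 / (2 * math.pi)
        rim_speed = omega * radius_m      # how fast the hull itself is moving
        print(f"r = {radius_m:>4} m: {rpm:5.2f} rpm, rim speed {rim_speed:5.1f} m/s")

At a 10 m radius you need nearly 10 rpm; to get under about 2 rpm you need a radius of a couple hundred meters, which is how you end up steering a spinning, sloshing building.)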

This is where we get into the engineering challenges. I'm certain, for instance, that we can build computers that last 50 or 100 years. After all, the Voyager space probes are still kind of working, 40 years later. Actually, there's a fun little problem here: a few bespoke resilient computers for expensive space probes won't disrupt a consumer electronics market built on planned obsolescence, but what if you're building an effectively immortal (to a first approximation) system? Won't that decimate the local computer industry, when everybody wants a computer that they can pass on to their kids, rather than discarding, just so that a team of engineers can stay employed making replacements? After all, STL voyages last decades to centuries, and the electronics all have to work forever, with only onboard repairs. This would actually be one of the precursors to deep space colonization: computers stop being made to fall apart, and instead are built simultaneously rugged, long-lived, and easy to repair. Or, of course, we could put an entire computer fabrication facility on every spaceship. I'm sure that won't take much weight. And swapping out the navigation system every few years should be really easy, too.

That's just one subsystem. If we're talking about a century- or millennium-long voyage (and note that these are optimistic given our current state of propulsive affairs), then to a first approximation, every bit of hardware either has to last the entire trip, or has to be totally repairable using the (recycled?) supplies brought along. Yes, yes, I know, 3-D printing. That will certainly be part of it, but don't you think that there's going to be critical infrastructure that just can't be reprinted ad nauseam, like critical structural elements and parts of the hull? You'll need really good (dare I say magical?) printing capabilities to reprint a big chunk of the ship from inside the ship. You'll also need a really efficient materials recycling facility to sort all the waste materials and efficiently remanufacture all the printer feedstocks. But heck, sorting stuff into pure materials streams and rebuilding it only takes lots of energy, time, know-how, and specialized technology, which is why we don't yet do this with municipal trash. Actually, trash management is another one of those little precursors: if idiot-proof urban recycling becomes a thing, we'll be one small step closer to space.

Then we've got the big noise, the interstellar medium and the fun of ramming into it at high speeds. Raising your kids in the middle of a firing range or next to the containment shell of a nuclear reactor is positively tame in comparison. Interplanetary and interstellar space are astonishingly good vacuums (better than we can readily make on Earth), but they're not empty. Worse, the stuff in space tends to move really, really fast, which means it has a lot of energy. Bullets travel at around 1 km/sec, but meteors travel at 10 km/sec and above, and an STL spaceship needs to get moving much faster than this to make decent time between the stars. Even the best steering system can't get a ship (especially a huge, spinning, sloshing ship) rapidly out of the way of some bit of interstellar debris. No, we need shields, and those shields need to be fixable or replaceable from inside the ship, because humans aren't going to survive very well either out in that shooting gallery. Yes, anti-micrometeorite armor (like Whipple shields) works on a different principle than terrestrial armor and wouldn't stop a bullet, but even it needs to be replaced, and a starship will occasionally run into bullet-sized space junk at ultraballistic speeds. So we need magical armor. And magical radiation shielding too, preferably in the shape of a mobile cowling, so that robots and humans can get outside and work on the starship hull, under cover, and not die rapidly. More magic! Or heck, I'd settle for a force field at this point.
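
To put a number on "a lot of energy": kinetic energy goes as the square of velocity, so the same one-gram grain that would merely pit a hull at meteor speeds hits like artillery at starship speeds. A sketch with round numbers (the 10%-of-c cruise speed is my assumption, not a requirement from anything above):

    TNT_J_PER_TON = 4.184e9  # joules per ton of TNT equivalent

    def ke_joules(mass_kg: float, v_m_per_s: float) -> float:
        return 0.5 * mass_kg * v_m_per_s ** 2

    bullet = ke_joules(0.010, 1_000)         # 10 g slug at 1 km/s
    meteor = ke_joules(0.001, 10_000)        # 1 g grain at 10 km/s
    cruise = ke_joules(0.001, 0.10 * 3.0e8)  # same grain met at 10% of c

    print(f"bullet:        {bullet:,.0f} J")
    print(f"meteor grain:  {meteor:,.0f} J")
    print(f"grain at 0.1c: {cruise:.2e} J (~{cruise / TNT_J_PER_TON:.0f} tons of TNT)")

That last line works out to roughly a hundred tons of TNT, from a single gram of grit.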

And yes, there's a rocket firing for years to centuries to push the starship up to speed. How long do real-world rockets fire for, again? The starship engine is another one of those magical technologies. While yes, ion engines have fired continuously for years (on the Dawn space probe, for example), their thrusts are tiny, equivalent to the weight of a piece or two of paper in your hand. Since space is effectively frictionless, those tiny thrusts add up, but only on relatively light-weight spacecraft, over interplanetary distances, and over a few years. We need extremely high thrust sustained for centuries, and it's not clear how to get this. The closest we might want to get is an Orion drive powered by hydrogen bombs, but then we've got to store those beasts indefinitely. And I'm sure everyone wants to grow up immediately adjacent to a nuclear test range, protected by some really, really good shielding that will have to be repaired in house, even though it's a wee bit radioactive.
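
The underlying problem is Tsiolkovsky's rocket equation: the wet-to-dry mass ratio grows exponentially with delta-v divided by exhaust velocity. A sketch, again assuming a 10%-of-c cruise with braking at the far end (my assumption) and generous round-number exhaust velocities:

    import math

    delta_v = 2 * 0.10 * 3.0e8  # m/s: reach 0.1c, then brake at the destination

    for name, v_exhaust in [("chemical rocket (4.5 km/s)", 4.5e3),
                            ("ion drive (50 km/s)", 5.0e4),
                            ("fusion torch (10,000 km/s)", 1.0e7)]:
        # mass_ratio = exp(delta_v / v_exhaust); work in log10 to avoid overflow
        log10_ratio = (delta_v / v_exhaust) / math.log(10)
        if log10_ratio < 12:
            print(f"{name:28s} mass ratio ~ {10 ** log10_ratio:,.0f}")
        else:
            print(f"{name:28s} mass ratio ~ 10^{log10_ratio:,.0f}")

Even the hand-waved fusion torch needs a mass ratio around 400 (propellant to everything else); the chemical and ion numbers come out near 10^5800 and 10^520 respectively, which is why bomb-propelled Orion keeps coming up despite its drawbacks.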

Then, once we get to the new planet, we've got to land on it, repeatedly. So we need landers that can boost themselves back up to orbit, ideally in a single stage. That's easy, we're developing SSTO (single stage to orbit) technology now. Right? Well, the interesting little challenge is that your lander has to be full of fuel to take off again. Indeed, without magic fusion rockets or some such, almost all of the lander's weight when it lands has to be fuel. And it's going to be really hot on landing, as it decelerates from orbital speed (Mach 10+) down to zero. So you're flying the equivalent of an ostrich egg full of rocket fuel, decelerating it from Mach 10 to zero, landing on a totally unimproved landing spot (so the lander either has to be able to hover or land in the water, take your pick), and then taking off from that spot (or the water) again. And if you think the water launch of a supersonic plane is easy, you really should google "XF-2Y Sea Dart." Anyway, conventional rocket landers need more than a little magitech to work. We could use Orion technology to land and take off, but then the lander is going to have to land, erm, quite a long distance from wherever the colony is. That's going to be a bit tedious, especially the part where they have to repair the road after every launch or landing.

Finally, we've got the problem of using the toilet. Yes, I know space toilets have come a long way. Here I'm talking about recycling nutrients, all 17 of them. People have tried living in closed ecosystems since the 1970s, and it's a chore. I saw a description of one DIY experiment that said that the man involved had to produce feces of the correct weight and composition every day, just to feed the recycling system that fed the plants that fed him. If you've got a small, closed ecosystem, shit can't just happen, it has to be excreted in precise amounts and on schedule. Earthly ecosystems are resilient to when poop happens because there are huge surpluses of some nutrients (like nitrogen in the air). This gives us a fair amount of slack in how nutrients get processed. Dead wood can lie around for centuries in the desert without causing all the plants around it to die from lack of carbon. Unfortunately, when you get into a smaller ecosystem, the surplus nutrient pools are smaller. So, if there's too much dead wood around (or unprocessed feces) you really could starve, and if you don't have enough oxygen for the microbes to break down the dead wood, you could suffocate as the microbes got to work recycling your waste. Biosphere 2 ran into problems associated with this. Ideally, you want the starship's biosphere to be as big as possible for stability. Simultaneously you need to minimize its weight and size to make it easier to send to another star system. Magic ecosystem handling? That's the easy solution. The hard solution is making sure that everyone on the ship is more capable of running an ecosystem than are almost all PhD ecologists currently working (that would include me, incidentally).
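
Here's a toy stock-and-flow sketch of why small buffer pools are so unforgiving. Every number in it is invented; the point is only the shape of the failure, where a brief stall in recycling starves the crops whenever the free nutrient pool is small:

    def run(buffer0: float, lag_days: int, hiccup_day: int, hiccup_len: int,
            days: int = 120, daily_need: float = 1.0) -> str:
        # Crops draw daily_need from a free-nutrient buffer; waste returns as
        # compost after lag_days. During the hiccup the recycler produces
        # nothing (the unprocessed waste just piles up, unmodeled).
        buffer = buffer0
        in_transit = [daily_need] * lag_days
        for day in range(days):
            buffer += in_transit.pop(0)  # finished compost rejoins the buffer
            stalled = hiccup_day <= day < hiccup_day + hiccup_len
            in_transit.append(0.0 if stalled else daily_need)
            if buffer < daily_need:
                return f"crops starve on day {day}"
            buffer -= daily_need
        return f"survives {days} days with {buffer:.0f} units of slack left"

    print("generous buffer:", run(buffer0=30, lag_days=5, hiccup_day=20, hiccup_len=10))
    print("tight buffer:   ", run(buffer0=3, lag_days=5, hiccup_day=20, hiccup_len=10))

The tight configuration dies barely a week into a ten-day recycling stall; the generous one shrugs it off. Earth is the generous configuration.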

Speaking of which, the crew: all astronauts, the best of the best, right? Good breeding stock and all? And their grandkids are going to settle the new planet? Well, erm, yeah. There are problems here too. One is that humans don't breed true, so amazingly talented people tend to have less talented kids; it's called regression to the mean. In a multigenerational setting, you have to allow for incredibly talented initial crew getting old, becoming incompetent, and passing off their responsibilities to their less-talented offspring. That's tricky. You also have to allow for people being incapacitated, whether they are young, old, pregnant, sick, or drunk. Yes, drunk. One of my proposals for dealing with the shortcomings of a closed ecosystem was to designate 10% of the grain crop for making beer, so that people could get drunk occasionally. The point wasn't that alcoholism was good; it was that if your nutrient cycling is so tight that you can't afford the surplus crop needed for an occasional party, then you're absolutely incapable of dealing with problems that incapacitate part of the crew, let alone storing surplus food for when it's needed. Having a system that's resilient to people getting drunk occasionally is one way to make sure that your system can also deal with more serious problems. And, if there's a crop failure, the grain that would have gone to alcohol can be used for food.

I could go on, but there are three points that really need to be made instead. One is that STL can involve as much magic technology as FTL. It doesn't involve breaking Einstein, but Einstein's not the only scientific hurdle out there. All sorts of things are permitted by general relativity but physically or logistically impossible.

The second is that our species isn't ready for the stars. We're not magical enough. If we were getting close, the precursors for interstellar technology would already be around, changing our lives. For instance, if we could almost build a starship, it would be possible for an (evil) magnate to build a secret lair that was impenetrable to anything including a nuclear blast (starship shielding). His minions could take shelter in that lair, seal the entrances, and live in there under his dynasty for centuries, with no problem at all (closed ecosystem with indefinite recycling, plus social engineering). Climate change would be a non-issue for the super-rich, because their castles would be proof against anything the climate or outsiders could throw at them. And we'd have the equivalent of the GNP of Russia to literally throw away in making a starship that would send a few hundred people on a one-way trip to a nearby star, since that's about the level of resources you'd need for a starship. So yeah, we're not there yet. This isn't to say that you can't write a story using STL, but it would be good if you spread the magic technology more widely than just in your ship. Why should people have starships in space, but only the Whole Earth Catalog planetside? Starship tech makes for great secret bases and mechanized armor, if nothing else. And every character won't just be able to fix a toilet; they'll have the whole system piped through the composting and growth chamber in their closet, feeding them a treat a few weeks later. In an STL-enabled world, proving you can take care of your own crap should be a rite of passage akin to getting a driver's license today.

The final point is one that I'm sure is well-known to SF cognoscenti: there's a reason so many SF writers have used FTL, gravity control, reactionless drives, and force fields. They make things easy. Instead of getting into the aeroponic weeds about how everybody must cycle their nutrients through the system for centuries, you just wave at least two of those magic wands and all of the difficult STL technical challenges go away. You can speed from star to star before your life support runs out, land on planets and take off as many times as you want, and interplanetary and interstellar meteors won't kill you, because you're not about to run into them at high speed without proper shielding. They're not stupid tropes, just overexposed because they're so gosh darned useful. I may be wrong, but I believe that the SF writers who originally proposed this tetrad knew enough about science and engineering to have a good idea of the problems they were avoiding by using them. Sadly, we've since discovered that the problems were even worse than they originally thought. Perhaps later generations of SF aficionados have forgotten and need to be reminded?

What did I miss? Heat, did you say? Power plants? Shipping corpsicles and thawing on arrival? Put 'em in the comments.


Installing a Credit Card Skimmer on a POS Terminal


Watch how someone installs a credit card skimmer in just a couple of seconds. I don't know whether the skimmer just records the data for later collection, or transmits it back to some base station.
