
Interesting times

May you live in interesting times, says a curse (wrongly) attributed to the ancient Chinese. Are times of change worse than idle stability? I don’t think so.

And we indeed live in interesting times. Last year brought us many breakthroughs and intriguing discoveries. We learned about the first small rocky exoplanet, Kepler-10b. The Large Hadron Collider, the largest machine man ever built, finally observed some hints that may be attributed to the Higgs boson, an elusive particle missing from the Standard Model of particle physics. Another experiment, OPERA, shocked the world with the possibility that neutrinos may travel faster than light, a notion at odds with Einstein’s theory of special relativity. Human clinical tests of an HIV vaccine were approved to start in early 2012. And hundreds more.

Technological progress accelerates, and it accelerates with increasing speed. That means exponential growth of knowledge and of our ability to change the world. At a conference in 2010, Google’s CEO said that mankind creates as much data in two days as it did from the beginning of civilization up until 2003. That’s of course a rough estimate, but it’s still striking. The amount of data produced globally more than doubles every two years. Somewhere around 2007 that amount exceeded the world’s total storage capacity (which is also doubling, but at a slower rate). In 2010 the quantity of newly created and replicated information exceeded one zettabyte for the first time. That’s 10^21 (1,000,000,000,000,000,000,000) bytes.

The human brain contains about 10^11 neurons and 10^14 synapses. The maximum frequency at which neurons can discharge is about 1 kilohertz. If every synapse in the brain processed one bit of information at maximum speed, that would give us a theoretical upper limit of about 10^17 bits per second for the brain’s information processing capability. Of course in reality that number is much, much lower (limited mainly by sensory input), but let’s take 10^16 bytes (ten million gigabytes) per second as the upper value. Such a super-powered brain would need over two days to process 2011’s two zettabytes of new data. “That’s nothing!”, you might say. Let’s conduct a small thought experiment then. If the amount of data produced by human civilization rises by 50% each year (a very conservative stance), it will exceed 6×10^33 B (6 billion yottabytes) by 2083. The world’s human population will have almost reached 17 billion by then, assuming a 1.2% annual growth rate. If we calculate the total computing power of 17 billion human super-brains at that time, it comes out below 6×10^33 bytes per year. That means that even if our brains reached this impossible efficiency, humanity would not be able to comprehend all the data generated worldwide.
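If you want to check the arithmetic, here’s a back-of-the-envelope version in a few lines of C++ (my own sketch, using the same assumptions as above: 2 ZB produced in 2011, 50% yearly data growth, 1.2% population growth from 7 billion, 10^16 bytes per second per super-brain):

#include <cstdio>
#include <cmath>

int main()
{
    const double years  = 2083 - 2011;                   // 72 years ahead
    const double data   = 2e21 * std::pow(1.5, years);   // bytes produced in 2083
    const double people = 7e9 * std::pow(1.012, years);  // ~16.5 billion humans
    const double brains = people * 1e16 * 3.15e7;        // bytes/year across all super-brains
    std::printf("data: %.1e B, brains: %.1e B/year\n", data, brains);
    return 0;
}

It prints roughly 10^34 bytes of data against 5×10^33 bytes per year of processing capacity – the data wins.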

Of course this is only a very rough estimation, assuming upper limits and steady exponential progression, not accounting for physical limitations, etc. Accuracy is not the point, however. Everything we have seen so far points to an explosive growth of processing power available to humans in the near future, and to our mental capacity eventually being outclassed. Outclassed by artificially built computers, and most likely soon. Maybe even within our lifetimes.

What does that mean? Why is it so significant to think about the day when we build the first super-intelligent system? And is it even possible?

The human brain is a machine. An incredibly complex and precise one, but still only a machine. It can be analyzed down to the molecular level (that’s what the Blue Brain project is aiming to do) and possibly replicated artificially or simulated. The same laws of physics govern neurons as well as transistors. Given time, we will gain a complete understanding of how our brains work. But that’s most likely not even necessary for general artificial intelligence to be created. If quantum effects do not significantly impact the brain’s operation, we should be fine with just a map of neurons, synaptic connections and their states. And it would be foolish to expect that our brain is the only structure capable of intelligent thought. Intelligence may just be a property of sufficiently complex systems, quite possibly systems with different physical characteristics. We already know that many deceptively simple systems exhibit emergent behavior: cellular automata, nonlinear dynamical systems, even a recursive quadratic function. But I digress.
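That recursive quadratic function deserves a footnote. The classic example is the logistic map x → r·x·(1−x): for r = 4 the iterates bounce around chaotically despite the one-line definition. A minimal sketch:

#include <cstdio>

int main()
{
    double x = 0.2;        // initial value
    const double r = 4.0;  // chaotic regime
    for (int i = 0; i < 20; ++i)
    {
        std::printf("%2d: %.6f\n", i, x);
        x = r * x * (1.0 - x);  // one "simple" quadratic step
    }
    return 0;
}

Nudge the initial value by a millionth and the sequence diverges completely within a few dozen steps – complexity from almost nothing.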

So we will finally create something that is more intelligent than us. Super-intelligence, SI for short. What then?

SI will not be a computer like we know today. It most likely won’t be programmed: it will “program” itself, like our brain does. We will need to communicate with it in a meaningful way, which may not be easy. It may not even want to communicate with us. Yes – if such an intelligent being can set its own goals, those goals may not be in line with ours. It may even perceive humans as a threat, if it learns of our existence. Rogue AI, such a lovely subject for science fiction. Of course the reason for a lack of communication may be more prosaic: if the SI is intelligent enough, it may just come to the conclusion that exchanging data with such primitive creatures as humans is a waste of time. Who knows.

Let’s say we overcome all these difficulties and humanity is able to benefit from SI’s power. What now?

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. – I. J. Good (1965), “Speculations Concerning the First Ultraintelligent Machine”, in Franz L. Alt and Morris Rubinoff (eds.), Advances in Computers.

Indeed. Recursive self-improvement. Once the first SI is created, the exponential growth of our technological prowess will accelerate even further, to the point where a present-day human would not be able to comprehend such a future in any meaningful way. It’s like a veil covering things to come, an asymptote on a plot, the event horizon of a black hole. That’s why we call such an event a technological singularity.

It’s not possible to predict the impact of even “normal” inventions if they are radical enough. What would an 18th-century man say when presented with a present-day personal computer or phone? How would he comprehend the workings of the GPS system, which relies on Einstein’s theories for precise measurements? And the technological singularity is far more radical than that.

We can try to come up with possible achievements of a post-singularity era. Most are incarnations of old and new dreams. Immortality, achieved by repairing our bodies or creating new ones from scratch, biological or not. Better bodies, better brains, integrated with the new superintelligent network. Breakthroughs in physics allowing manipulation of space and time. Working molecular nanotechnology shaping the world around and inside us as we desire, probably eliminating the need for a material economy forever. The possibility to extend our consciousness, make snapshots of it, split and merge multiple copies of our minds. But that’s only speculation. In essence, such posthumans will probably rapidly advance to and beyond a Kardashev Type I civilization, completely controlling their home planet. Come to think of it, what if some cosmic civilization achieved such a state thousands or millions of years ago, still a very short time in the grand scheme of things? They would be like gods to us: almost omnipotent, incomprehensible, alien.

What’s striking is that we are most likely very close to the Singularity ourselves. We are transhumans, witnessing a qualitative change of our civilization happening before our eyes. If we are lucky, we may live long enough to peek through the veil and become something else. Something greater. Something more.

~ ~ ~

Sources and/or recommended reading:
The Coming Technological Singularity: How to Survive in the Post-Human Era
The Law of Accelerating Returns
Wikipedia: Technological singularity, Exabyte, Zettabyte, Artificial Intelligence, Neuron, Mind uploading, Nanotechnology, Kardashev scale, Post-scarcity
New Measure of Human Brain Processing Speed
How Much Information? 2003
How much total information is there in the world?
The Expanding Digital Universe (2007), The Diverse and Exploding Digital Universe (2008), Extracting Value from Chaos (2011)

Fiction:
Charles Stross – Accelerando and other works.
Stanisław Lem
Jacek Dukaj
Peter Watts – Blindsight
Neal Stephenson
Bruce Sterling
William Gibson
Dan Simmons
Iain Banks
Walter Jon Williams – Aristoi
Ian McDonald

Vacations

December is a quiet month (as November was, mostly). I took a break from coding to “reset my mind”. Our rat family has temporarily grown – for one day we had 17 beasts in the house – but still only 4 of them are ours. I’ve been playing Skyrim, The Binding of Isaac and recently Terraria with the new 1.1 update. After some fooling around with friends in multiplayer, I started a single-player playthrough with a hardcore character. That means if he dies, it’s over. It reminds me of roguelikes in that every move can be your last if you’re not careful. I’ve already buried two characters – one died from falling damage in a corrupted chasm, the other was burned to a crisp by lava thanks to my lack of attention. We’ll see if the next one succeeds in at least activating the hard mode. It’s a very fun game, highly recommended. I also bought the new Dungeons of Dredmor DLC/expansion, Realm of the Diggle Gods, and will give it a shot soon. Dungeons of Dredmor is an indie graphical roguelike with an insane sense of humor, containing skills like Fleshsmithing, Mathemagic, Necronomiconomics or Emomancy. It’s a great entry into the roguelike world if you haven’t played any yet.

That’s it for now, happy holidays everyone!

Kids Christmas Tree

Recently a friend of mine published an iPhone/iPod/iPad game for kids, which in essence is about decorating a virtual Christmas tree. It’s dirt cheap, fun, and features wonderful hand-drawn art. Get it, you still have some time!

So, Heimdall

What have I been doing lately? Playing Skyrim, playing The Binding of Isaac, procrastinating, and thinking about the internal structure of an MMORPG game server – roughly in that order.

Wait, what? An MMORPG game server?

Yeah. Let this be an introduction to Heimdall – my personal pet project since around spring this year. Heimdall is (going to be) the engine for a (pseudo) isometric 2.5D MMORPG inspired by Ultima Online. I was a player and an administrator/programmer of many custom UO shards back in the day. I actually learned C# to be able to script RunUO, the best UO server emulator out there. Some time ago a couple of friends and I came to the conclusion that it would be really nice to have a game like UO, but without its limitations, with our own world, lore and the freedom to create things we would like to see out there. A game that is not another WoW clone, but a sandbox without meaningless “kill ten bears” quests all over the place; a game where you could actually roleplay, fight and have good old-fashioned fun.

Of course, at the beginning I was very skeptical about the idea of making an MMO from scratch. I’m a software engineer and a gamer; I know damn well how complicated it would be. Come to think of it, MMORPGs are probably among the most complex pieces of software in existence. Let’s think about the topics you and your team need to know about:

  • Damn good design skills, to come up with a half-decent, maintainable and extendable architecture for the whole thing
  • Network programming on a pretty sophisticated level
  • Databases and storage backends
  • Security, on multiple levels (server, client, network)
  • Actual game design, which is an enormous topic in itself (basic mechanics, world lore, map design, quests, encounters, etc.)
  • Artificial Intelligence (or artificial stupidity) for your hostile entities
  • Graphics programming
  • Asset creation – drawing, modelling, creating sounds and music
  • GUI creation, usability, extensibility of the client UI
  • Patching, content distribution
  • Probably lots of other things I forgot

And that’s without the whole slew of issues that appear if you want to go “big”: server/database sharding, reliability, system/network engineering, knowledge of financial processing systems… Yeah, that’s A LOT of things to take care of. I knew that. But I still took up the challenge. It may take years to come up with something usable, but for me it’s worth it for the learning experience alone.

So where do I stand now? I have the basic system architecture planned. I have an account/authentication server working (creating accounts, logging in, selecting game servers, etc.). I have a first prototype of the graphical client in progress. Now I’m at the point of the biggest initial challenge (IMHO): to proceed with the initial prototype, I need a base design for the actual game server mechanics. The first iteration requires player avatars moving on the map, able to see each other. That means the game server needs to process and store game world entities, and entities need to be able to interact with each other, at least on a basic “perception” level. Coming up with a good, extendable and maintainable architecture for game entities is harrowing. Especially if you don’t know what will be added in the future.

At the moment I’m planning to use an entity system approach instead of the traditional OO class tree, along the lines of the sketch below. I’ll most likely talk about it in more detail in a future post, but that’s it for now.
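To make it concrete, here’s a minimal sketch of the idea (hypothetical names, nothing from Heimdall’s actual code): an entity is just an id, and its behavior comes from components attached to it rather than from its place in a class hierarchy.

#include <string>
#include <unordered_map>

typedef unsigned int EntityId;

struct Position   { float x, y; };
struct Perception { float radius; };  // how far this entity can "see"
struct Named      { std::string name; };

// One map per component type; an entity "has" a component if its id is present.
struct World
{
    std::unordered_map<EntityId, Position>   positions;
    std::unordered_map<EntityId, Perception> perceptions;
    std::unordered_map<EntityId, Named>      names;
};

// A "system" operates on entities that have the components it cares about.
// Here: a naive visibility check for the first-iteration "perception" level.
bool CanSee(const World& w, EntityId observer, EntityId target)
{
    std::unordered_map<EntityId, Position>::const_iterator op = w.positions.find(observer);
    std::unordered_map<EntityId, Position>::const_iterator tp = w.positions.find(target);
    std::unordered_map<EntityId, Perception>::const_iterator pe = w.perceptions.find(observer);
    if (op == w.positions.end() || tp == w.positions.end() || pe == w.perceptions.end())
        return false;
    const float dx = op->second.x - tp->second.x;
    const float dy = op->second.y - tp->second.y;
    return dx * dx + dy * dy <= pe->second.radius * pe->second.radius;
}

Adding a new kind of behavior then means adding a new component type and a system that processes it, instead of reshuffling an inheritance tree.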

OPERA neutrinos still rocking

So, the faster-than-light neutrinos may actually be real. An improved version of the original experiment has confirmed the initial result. Of course, real “proof” would require repeating it somewhere else, by an entirely different team. We’ll see. I’m cautiously optimistic.

On WinAPI timers and their resolution

While working on PuttyRec (that is, practically rewriting it from scratch) I wondered about the resolution of various Windows timer APIs. Of course that’s pretty irrelevant in the context of a terminal recorder; it’s not like you need to cope with more than a few events per second there. But being the curious man I am (and sometimes a perfectionist when it comes to code), I set out to investigate the matter myself.

What selection do we have, then? There is SetTimer, of course. PuttyRec used it before and it worked without major problems. SetTimer has its issues though, the biggest being that it’s only usable on UI threads (that is, threads owning a message queue). Although creating a message queue is easy (just call any message-related API), that’s hardly convenient. It also has a reputation of being more than slightly inaccurate.
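For reference, the minimal pattern looks something like this (a sketch of mine, not PuttyRec’s code) – note the message loop, without which the callback never fires:

#include <windows.h>
#include <cstdio>

static int g_ticks = 0;

VOID CALLBACK OnTick(HWND, UINT, UINT_PTR, DWORD time)
{
    std::printf("tick %d at %lu ms\n", ++g_ticks, time);
    if (g_ticks == 10)
        PostQuitMessage(0);  // stop after ten ticks
}

int main()
{
    // NULL hwnd: WM_TIMER messages land in this thread's queue.
    SetTimer(NULL, 0, 100, OnTick);

    // Pumping messages is what makes the timer tick at all.
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
        DispatchMessage(&msg);  // invokes OnTick for WM_TIMER
    return 0;
}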

Then we have timeSetEvent, a multimedia timer. It doesn’t require a message queue (though it runs its callback on a separate thread) and is supposed to be more accurate. The MSDN page contains a curious notice though: “Note: This function is obsolete. New applications should use CreateTimerQueueTimer to create a timer-queue timer.” A “better” function, eh? I had used it before and it too seemed to work well.

Interestingly, MSDN doesn’t specify what resolution these functions have. That’s pretty unhelpful if you need a high-precision timer. So I wrote a quick and dirty test program that runs all three of the above timers for a period of one second with a varying timer period. As a reference I included a “busy wait” loop that just spins for the desired amount of time. Time measurement was done with QueryPerformanceCounter. Without further ado, here are the results (32-bit release, but 64-bit/debug is roughly the same):

Period: 100ms
CreateTimerQueueTimer    : count=   9, avg error= 0.085%, std dev= 0.193
timeSetEvent             : count=   9, avg error= 0.013%, std dev= 0.017
Busy wait                : count=   9, avg error= 0.000%, std dev= 0.000
SetTimer                 : count=   8, avg error= 9.629%, std dev= 2.060

Period: 50ms
CreateTimerQueueTimer    : count=  19, avg error= 0.109%, std dev= 0.140
timeSetEvent             : count=  19, avg error= 0.027%, std dev= 0.022
Busy wait                : count=  19, avg error= 0.000%, std dev= 0.000
SetTimer                 : count=  15, avg error=24.800%, std dev= 3.117

Period: 20ms
CreateTimerQueueTimer    : count=  49, avg error= 0.177%, std dev= 0.242
timeSetEvent             : count=  49, avg error= 0.106%, std dev= 0.095
Busy wait                : count=  49, avg error= 0.135%, std dev= 0.656
SetTimer                 : count=  31, avg error=55.518%, std dev=12.167

Period: 10ms
CreateTimerQueueTimer    : count=  99, avg error= 0.216%, std dev= 0.197
timeSetEvent             : count=  99, avg error= 0.192%, std dev= 0.257
Busy wait                : count=  99, avg error= 0.122%, std dev= 0.537
SetTimer                 : count=  63, avg error=60.329%, std dev=35.078

Period: 5ms
CreateTimerQueueTimer    : count= 199, avg error= 0.796%, std dev= 3.244
timeSetEvent             : count= 199, avg error= 0.257%, std dev= 0.380
Busy wait                : count= 199, avg error= 0.014%, std dev= 0.106
SetTimer                 : count=  63, avg error=212.087%, std dev=74.290

Period: 2ms
CreateTimerQueueTimer    : count= 499, avg error= 2.291%, std dev=12.107
timeSetEvent             : count= 499, avg error= 0.790%, std dev= 0.831
Busy wait                : count= 499, avg error= 0.006%, std dev= 0.005
SetTimer                 : count=  63, avg error=680.188%, std dev=209.531

Period: 1ms
CreateTimerQueueTimer    : count= 999, avg error= 4.185%, std dev=11.480
timeSetEvent             : count= 999, avg error= 1.268%, std dev= 1.695
Busy wait                : count= 999, avg error= 0.225%, std dev= 3.743
SetTimer                 : count=  62, avg error=1450.387%, std dev=415.344

What can we say about that?

  • Obviously, inaccuracy rises as the period gets smaller.
  • SetTimer is really inaccurate. Even with a 100ms period it was off by 10%, and below that it’s just useless.
  • Busy wait was of course the most accurate.
  • CreateTimerQueueTimer was remarkably worse than the “obsolete” timeSetEvent! What’s going on here?
  • timeSetEvent was performing really well even with the smallest periods. Errors below 1%, low standard deviation. Deprecated my ass.

So there you have it. I’d recommend timeSetEvent for good precision, and if that’s not enough for you – timeSetEvent with a short busy wait at the end to refine the result.
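To illustrate that last suggestion, here’s a sketch (made-up example code, not from PuttyRec): ask timeSetEvent to wake us up slightly early, then spin on QueryPerformanceCounter for the last stretch.

#include <windows.h>
#include <mmsystem.h>  // timeSetEvent
#include <cstdio>
#pragma comment(lib, "winmm.lib")

static LARGE_INTEGER g_freq, g_start;

static double ElapsedMs()
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return (now.QuadPart - g_start.QuadPart) * 1000.0 / g_freq.QuadPart;
}

static void CALLBACK OnTimer(UINT, UINT, DWORD_PTR user, DWORD_PTR, DWORD_PTR)
{
    const double deadline = *reinterpret_cast<double*>(user);
    while (ElapsedMs() < deadline)
        ;  // woken ~1 ms early; the busy wait refines the last stretch
    std::printf("fired at %.3f ms (target %.3f)\n", ElapsedMs(), deadline);
}

int main()
{
    QueryPerformanceFrequency(&g_freq);
    QueryPerformanceCounter(&g_start);

    static double deadline = 100.0;  // ms
    // Request the callback 1 ms before the deadline, at 1 ms resolution.
    timeSetEvent(99, 1, OnTimer, reinterpret_cast<DWORD_PTR>(&deadline), TIME_ONESHOT);
    Sleep(200);  // keep the process alive for the demo
    return 0;
}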

And the source code:

Windows timers microbenchmark code

Roguelike-like on strike

I’ve been slacking a bit lately. My excuse is that I’ve been playing some really interesting games.

Let’s start with Breath of Death VII: The Beginning. I got it along with Cthulhu Saves the World, but wanted to play the weaker one first. Both are retro-style hack&slash RPGs with a healthy dose of humour. In Breath of Death VII you take control of DEM, a skeletal knight, in a post-apocalyptic world populated by the undead. You assemble a party, kill monsters and save the world (sort of). It was fun, although the fighting became a bit tiresome towards the end. Still, Cthulhu is supposed to be even better. Both games cost almost nothing, so get them!



~ ~ ~

Next up: Spelunky. I stumbled upon it while reading old Rock, Paper, Shotgun archives. It can be described as a roguelike-inspired platformer. Like randomly generated dungeons, death lurking everywhere and fiendish difficulty? Enjoy being eaten alive by man-eating plants, shot to death by angry shopkeepers, mauled by cavemen, exploded, crushed, torn by spikes and traps? Get it! It’s free! Spelunky features nice pixel art, great music and is the work of just one man: Derek Yu. Bonus: if you’d rather watch other people being mutilated by this game, see this. It’s hilarious.



~ ~ ~

Finally, there is The Binding of Isaac. I’d heard about it before, but thought that its arcade-shootery action wasn’t for me. Boy, was I wrong. I got myself Voxatron through the Humble Bundle, and a day later they added Isaac to the package. Now I had no excuse not to try it. It’s a grotesque-themed twin-stick shooter mixed with a roguelike. You play as Isaac (or other unlockable characters), journeying through a randomly generated basement filled with his personified childhood fears. Gather (random) powerups, collect unique items, kill (lots of) stuff. It’s weird, it’s fun, it’s addictive. I needed all my willpower to stop myself from clicking that “try again” button. Get it!




Let there be HFONT

After some preliminary refactoring of PuTTY Recorder’s code I started it up and was greeted with an unresponsive window. It wasn’t completely locked up, just very, very slow at updating. First thought: some kind of rampant recursion in the OnPaint handler, but that would have locked the window up for good. So I launched the profiler… and found nothing unusual. The dialog procedure was at the top of the inclusive samples of course, and below it just updating and painting – well duh, I knew that already. I tried profiling with instrumentation instead of sampling in hope of better accuracy, but that also got me nowhere. Huh. I stepped through the program initialization and everything seemed to be in order. Oh well, time for the best debugging method – DebugPrint()! I added print statements at the top of the most suspicious functions and restarted the application.

PuttyRecDlg::OnPaint
TerminalFrame::Draw
PuttyRecDlg::OnUpdate
PuttyRecDlg::OnPaint
TerminalFrame::Draw
PuttyRecDlg::OnUpdate
PuttyRecDlg::OnPaint
TerminalFrame::Draw

Uh-oh. As I watched the output, I realized that each Draw call took seconds to complete. What the hell? It’s just some text rendering and one bitmap blit. It worked perfectly before. The only major change was that I had added the ability for the user to select the font used for terminal rendering.

void TerminalFrame::Draw(HDC hdc, UINT startx, UINT starty, UINT char_width, UINT char_height) const
{
    if (m_font == 0)
        THROW("No font selected");

    SelectObject(hdc, m_font);
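    // ... text rendering and the bitmap blit follow (rest of the function omitted)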

The font wasn’t null, since I wasn’t getting exceptions. I checked manually – yeah, the SelectObject() call was successful. Then I realized that despite the font being seemingly OK, no text was appearing in the window. What gives? I ran the program again, waited agonizingly until it stopped updating and selected some font manually. Lo and behold, that seemed to fix everything – the window became responsive and text was visible. Let’s see where the font is being created for the first time, then.

void TerminalFrame::SetFont(const LOGFONT* logfont)
{
    if (m_font)
    {
        DeleteObject(m_font);
        m_font = 0;
    }
    m_font = CreateFontIndirect(logfont);
    if (!m_font)
        THROW("CreateFont failed");
}

I put a breakpoint there and restarted. What do you know, logfont is uninitialized. Apparently I forgot to set it to something sensible. But wait a second. If logfont is garbage, how does CreateFontIndirect() work? It returns something and apparently is NOT failing. It’s only when you use the resulting font that text operations become excruciatingly slow (and don’t do much).

So, TL;DR: you can create a font from nothing, but don’t expect it to work.
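If you want to dodge the trap, start from a known-good LOGFONT instead of whatever happens to be on the stack. Something like this (a sketch; DEFAULT_GUI_FONT is just an example, frame being a TerminalFrame):

LOGFONT logfont = {0};
// Copy the description of a stock font into logfont...
GetObject(GetStockObject(DEFAULT_GUI_FONT), sizeof(logfont), &logfont);
// ...tweak fields like lfHeight or lfFaceName if needed, then:
frame.SetFont(&logfont);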

Training, rats and other stuff

Last week was pretty exhausting. I was at a boring (but supposedly necessary) training, the place was far from my apartment, and I felt like crap the whole week. Also, we’ve got one more rat, Sznurówka (in addition to the current three sisters: Szufla, Szpadla and Szatan), who was in pretty bad shape and needed a lot of care to recover. All is well now though, and soon we’ll attempt to introduce the lot to each other. Szufla, the bravest of them all, has been through her own ordeal before: a mammary tumor developed on her in just a few hours. She underwent surgery and spent a week in isolation, but is now back to full health. Rats are amazing.

As for coding, I’ve decided to “renovate” and enhance one of my older projects – PuTTY Recorder. I wrote it a few years ago to record “console movies” of me and other people playing ADoM on a public SSH server. It could use a code cleanup and some UI enhancements though. I’ll release the sources once I’m done with it.

I’ve also played Uplink again. It’s a great little game, described by Wikipedia as a “simulator of the cinematic depiction of computer hacking”. This time I managed to complete the story mission arc. If you like(d) the game, I’d also recommend reading this nice piece of fan fiction.

And some rat pictures:

PS. Translation of the names (don’t ask :D)
Szufla – Shovel
Szpadla – Spade
Szatan – Satan
Sznurówka – Shoelace

IDisposable and you

After cleaning up the disposing logic in Gwen.Net, I may as well link this excellent article on the subject: DG Update: Dispose, Finalization, and Resource Management. It’s long, very detailed and signed by some of the best .NET programmers out there. Read it!

Yes, I know I shouldn’t throw exceptions in finalizers; I left them in the Debug build to better catch object leaks. I’ll provide it as an option later.