
Nerdworks Blogorama


Iterating over a std::tuple
Technobabble
2/23/2014 11:55:48 AM  

I’ve been trying to wrap my brain around the new variadic templates capability in C++11 lately and wondered if it would be possible to write a generic routine to iterate over the members of a std::tuple. I decided to start with the simple case of printing out the members of a tuple to the console via std::cout. First, I came up with a compile time recursive definition of a variadic template struct that overloads the function call operator like so. You might wonder why I didn’t stick with a plain variadic template function instead. That will become evident in a moment.

template<int index, typename... Ts>
struct print_tuple {
    void operator() (tuple<Ts...>& t) {
        cout << get<index>(t) << " ";
        print_tuple<index - 1, Ts...>{}(t);
    }
};

Clearly, we’ll need to define a base case to break out of the compile time recursion for the print_tuple call in the function call operator overload – which in this case would be when the non-type template parameter index is equal to zero. So I went ahead and defined the following partial specialization of print_tuple specializing on the value zero for the non-type template parameter index.

template<typename... Ts>
struct print_tuple<0, Ts...> {
    void operator() (tuple<Ts...>& t) {
        cout << get<0>(t) << " ";
    }
};

The reason I had to use a variadic template struct instead of a regular variadic template function is that it is not possible to partially specialize a template function in C++. Using a struct/class allows us to do so. Now that we have this setup, we can write a utility print routine to wrap calls to print_tuple and we should be done. Here’s a complete example:

#include <iostream>
#include <tuple>

using namespace std;

template<int index, typename... Ts>
struct print_tuple {
    void operator() (tuple<Ts...>& t) {
        cout << get<index>(t) << " ";
        print_tuple<index - 1, Ts...>{}(t);
    }
};

template<typename... Ts>
struct print_tuple<0, Ts...> {
    void operator() (tuple<Ts...>& t) {
        cout << get<0>(t) << " ";
    }
};

template<typename... Ts>
void print(tuple<Ts...>& t) {
    const auto size = tuple_size<tuple<Ts...>>::value;
    print_tuple<size - 1, Ts...>{}(t);
}

int main() {
    auto t = make_tuple(1, 2, "abc", "def", 4.0f);
    print(t);

    return 0;
}

All that print does is determine the size of the tuple via tuple_size, instantiate print_tuple and invoke the function call operator, passing in the tuple object in question. As you might have noticed, we are essentially working our way backwards till we hit the base case where index is zero – i.e. it prints the tuple members in reverse order. Here’s the output this produces:

4 def abc 2 1

Iterate in order?

I figured, implementing a version that iterates over the tuple members in the order that they are specified should be fairly straightforward. We should just need to define a different base case (for the last item in the tuple instead of the first) and the recursive implementation should simply increment the index instead of decrementing it. Here’s what I came up with:

template<int index, typename... Ts>
struct print_tuple {
    void operator() (tuple<Ts...>& t) {
        cout << get<index>(t) << " ";
        print_tuple<index + 1, Ts...>{}(t);
    }
};

template<typename... Ts>
struct print_tuple<tuple_size<tuple<Ts...>>::value - 1, Ts...> {
    void operator() (tuple<Ts...>& t) {
        cout << get<tuple_size<tuple<Ts...>>::value - 1>(t) << " ";
    }
};

The changes of interest are the incremented index in the recursive case and the new base case. It turns out that the base case definition – which handles the situation where print_tuple is instantiated with an index equal to the size of the tuple minus one – is not really valid C++. Partially specializing on a non-type template parameter in C++ can only be done using “simple identifiers”. The expression tuple_size<tuple<Ts...>>::value - 1 is certainly a compile time constant, and ideally the compiler should be able to compute that value (which in fact it does for the code in the body of that method definition), but for the purposes of the specialization it doesn’t. So, we’re kind of out of luck there.

Generalized iteration?

One might imagine that it should be possible to generalize this iteration (even if only in reverse order) so that we are able to supply arbitrary callbacks for processing tuple members. It turns out this, again, is not possible – which I think is reasonable, because at that point one really needs to question whether a tuple is the correct choice in the first place; a std::vector, a std::list or one of the other containers may be a more appropriate option. Having said that, you might be thinking that we should still be able to generalize this by adding another template parameter for a callback routine and passing in a template function for that parameter. Maybe something like this?

#include <iostream>
#include <tuple>

using namespace std;

template<int index, typename TCallback, typename... Ts>
struct iterate_tuple {
    void operator() (tuple<Ts...>& t, TCallback callback) {
        callback(get<index>(t));
        iterate_tuple<index - 1, TCallback, Ts...>{}(t, callback);
    }
};

template<typename TCallback, typename... Ts>
struct iterate_tuple<0, TCallback, Ts...> {
    void operator() (tuple<Ts...>& t, TCallback callback) {
        callback(get<0>(t));
    }
};

template<typename TCallback, typename... Ts>
void for_each(tuple<Ts...>& t, TCallback callback) {
    iterate_tuple<tuple_size<tuple<Ts...>>::value - 1, TCallback, Ts...> it;
    it(t, callback);
}

template<typename T>
void print(T v) {
    cout << v << " ";
}

int main() {
    auto t = make_tuple(1, 2, "abc", "def", 4.0f);
    for_each(t, print);

    return 0;
}

This won’t work because the compiler needs to know the type of TCallback when it instantiates the for_each template function – which in this case it doesn’t, and neither do we, because we want a different instantiation of print to be used for each unique type in the tuple. If there were some way of telling the compiler to postpone resolution of TCallback till it is actually used then this might have worked. As far as I know, that isn’t possible. But then I might be wrong. If you know a way of doing that, it’ll be great if you could please let me know in the comments.
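
For what it’s worth, here is one workaround (mine, not from the original post) that does compile: instead of passing the print function template itself, pass an object whose function call operator is a template. The object’s type is concrete, so TCallback can be deduced, while the right operator() instantiation is still picked for each tuple member. A minimal sketch reusing the iterate_tuple machinery from above:

#include <iostream>
#include <tuple>

using namespace std;

template<int index, typename TCallback, typename... Ts>
struct iterate_tuple {
    void operator() (tuple<Ts...>& t, TCallback callback) {
        callback(get<index>(t));
        iterate_tuple<index - 1, TCallback, Ts...>{}(t, callback);
    }
};

template<typename TCallback, typename... Ts>
struct iterate_tuple<0, TCallback, Ts...> {
    void operator() (tuple<Ts...>& t, TCallback callback) {
        callback(get<0>(t));
    }
};

template<typename TCallback, typename... Ts>
void for_each(tuple<Ts...>& t, TCallback callback) {
    iterate_tuple<tuple_size<tuple<Ts...>>::value - 1, TCallback, Ts...>{}(t, callback);
}

// The callback is an object whose type (printer) is concrete and therefore
// deducible, while its operator() is a member template so the correct
// instantiation gets picked for every member of the tuple.
struct printer {
    template<typename T>
    void operator() (T v) const {
        cout << v << " ";
    }
};

int main() {
    auto t = make_tuple(1, 2, "abc", "def", 4.0f);
    for_each(t, printer{});

    return 0;
}

This prints the members in reverse order, just like the earlier example.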

 
Playing in-memory audio streams on Windows 8
Technobabble
12/29/2013 7:46:09 AM  

A customer I’d been working with recently came up with a support request for a Windows 8 Store app they’d been working on. They were building the app using the HTML/CSS/JS stack and wanted the ability to play audio streams completely from memory instead of loading them up from a file on the file system or a network stream. They needed this because their service implemented a custom Digital Rights Management (DRM) system where the audio content was encrypted and this needed to be decrypted before playback (duh!). They wanted, however, to perform this decryption on the fly during playback instead of creating a decrypted version of the content on the file system. In this post I talk about a little sample I put together for them showing how you can achieve this on Windows 8. If you prefer to directly jump into the code and take a look at things on your own, then here’s where it’s at:

https://github.com/avranju/AudioPlayerWithCustomStream

Playing media streams from memory

The primary requirement proved to be fairly straightforward to accomplish. Turns out, there already exists an SDK sample showing exactly this. The sample shows how to achieve media playback from memory streams using the Windows.Media.Core.MediaStreamSource object. Briefly, here are the steps:

  1. First you go fetch some metadata from the media stream. In the case of audio content, this turns out to be the sample rate, encoding bit rate, duration and number of channels. For file based audio sources, the Windows.Storage.StorageFile object has the ability to extract this information from the file directly via Windows.Storage.StorageFile.Properties.RetrievePropertiesAsync. Here’s an example function that accepts a StorageFile object as input and then extracts and returns that metadata from it.
    function loadProps(file) {
        var props = {
            fileName: "",
            sampleRate: 0,
            bitRate: 0,
            channelCount: 0,
            duration: 0
        };
    
        // save file name
        props.fileName = file.name;
        return file.properties.getMusicPropertiesAsync().then(
         function (musicProps) {
            // save duration
            props.duration = musicProps.duration;
    
            var encProps = [
                "System.Audio.SampleRate",
                "System.Audio.ChannelCount",
                "System.Audio.EncodingBitrate"
            ];
    
            return file.properties.retrievePropertiesAsync(encProps);
        }).then(function (encProps) {
            // save encoding properties
            props.sampleRate = 
               encProps["System.Audio.SampleRate"];
            props.bitRate =
               encProps["System.Audio.EncodingBitrate"];
            props.channelCount =
               encProps["System.Audio.ChannelCount"];
    
            return props;
        });
    }
  2. Wrap the metadata gathered in step 1 in a Windows.Media.MediaProperties.AudioEncodingProperties object which in turn is then wrapped in a Windows.Media.Core.AudioStreamDescriptor object.
  3. Use the AudioStreamDescriptor object to initialize a MediaStreamSource instance and setup event handlers for the MediaStreamSource’s Starting, SampleRequested and Closed events. As you might imagine, the idea is to respond to these events by handing out audio data to the MediaStreamSource which then proceeds to play that content.
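
Here’s a minimal sketch (mine, not lifted from the SDK sample) of what steps 2 and 3 might look like in JavaScript, assuming the content is MP3 and that props holds the metadata returned by loadProps above:

// step 2: wrap the metadata in AudioEncodingProperties and an
// AudioStreamDescriptor
var encodingProps = Windows.Media.MediaProperties.AudioEncodingProperties
    .createMp3(props.sampleRate, props.channelCount, props.bitRate);
var descriptor = new Windows.Media.Core.AudioStreamDescriptor(encodingProps);

// step 3: build a MediaStreamSource around the descriptor and hand out
// audio data in response to its events
var mss = new Windows.Media.Core.MediaStreamSource(descriptor);
mss.duration = props.duration;

mss.addEventListener("starting", function (e) {
    // note the position the pipeline wants to start playback from
});
mss.addEventListener("samplerequested", function (e) {
    // produce a MediaStreamSample from the next chunk of (decrypted)
    // audio data and assign it to e.request.sample
});
mss.addEventListener("closed", function (e) {
    // release any resources held for playback
});

// finally, point an audio/video element at the stream source, e.g. via
// URL.createObjectURL(mss, { oneTimeOnly: true })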

This is all fine and dandy, but how do we get this to work when the audio content is stored in memory in a Windows.Storage.Streams.InMemoryRandomAccessStream object? The challenge, of course, is in extracting the metadata we need to set up a MediaStreamSource object.

StorageFile can read from arbitrary streams?

As it happens, the StorageFile object has direct support for being powered by an arbitrary stream (or pretty much anything, really). I figured I’d hook up a StorageFile with an InMemoryRandomAccessStream object and have it extract the metadata that I needed. Here’s how you connect a StorageFile with data fetched from any arbitrary source – in this case, just a string constant. You create a StorageFile object by calling StorageFile.CreateStreamedFileAsync, which requires that you pass a reference to a callback routine that is expected to supply the data the StorageFile object needs when it is first accessed. Here’s a brief example:

function init() {
    var reader;
    var size = 0;

    Windows.Storage.StorageFile.createStreamedFileAsync(
           "foo.txt", generateData, null).then(
       function (file) {
        // open a stream on the file and read the data;
        // this will cause the StorageFile object to
        // invoke the "generateData" function
        return file.openReadAsync();
    }).then(function (stream) {
        var inputStream = stream.getInputStreamAt(0);
        reader = new Windows.Storage.Streams.DataReader(inputStream);
        size = stream.size;
        return reader.loadAsync(size);
    }).then(function () {
        var str = reader.readString(size);
        console.log(str);
    });
}

function generateData(stream) {
    var writer = new Windows.Storage.Streams.DataWriter();
    writer.writeString("Some arbit random data.");

    var buffer = writer.detachBuffer();
    writer.close();

    stream.writeAsync(buffer).then(function () {
        return stream.flushAsync();
    }).done(function () {
        stream.close();
    });
}

The problem, however, as I ended up discovering, is that StorageFile objects that work off of a stream created in this fashion do not support retrieval of file properties via StorageFile.Properties.RetrievePropertiesAsync or, for that matter, StorageFile.Properties.GetMusicPropertiesAsync. So clearly, this approach is not going to work. Having said that, it’s useful to know that this technique is possible at all with StorageFile objects, as it allows you to defer the actual work of producing the data represented by the StorageFile object till it is actually needed. And being a bona fide Windows Runtime object, you can confidently pass this around wherever a StorageFile object is accepted – for instance, when implementing a share source contract you might hand out a StorageFile object created in this manner via Windows.ApplicationModel.DataTransfer.DataPackage.SetStorageItems.
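
For instance, a share source handler built around such a streamed file might look something like this (a sketch of the idea, not code from the sample; it reuses the generateData callback from above):

var dtm = Windows.ApplicationModel.DataTransfer
    .DataTransferManager.getForCurrentView();
dtm.addEventListener("datarequested", function (e) {
    var request = e.request;
    request.data.properties.title = "Shared file";
    request.data.properties.description = "A streamed StorageFile";

    // the file contents are produced lazily; generateData only runs
    // when the share target actually reads the file
    var deferral = request.getDeferral();
    Windows.Storage.StorageFile.createStreamedFileAsync(
            "foo.txt", generateData, null).then(function (file) {
        request.data.setStorageItems([file]);
        deferral.complete();
    });
});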

Reading music metadata using the Microsoft Media Foundation

After a bit of research I discovered that there is another API that can be used for fetching metadata from media streams (among other things) – the Microsoft Media Foundation. In particular, the API features an object called the source reader that can be used to get the data we are after. The trouble, though, is that this is a COM based API and cannot therefore be directly invoked from JavaScript. I decided to write a little wrapper Windows Runtime component in C++ and then use that from the JS app. After non-trivial help from my colleague Chris Guzak and others directly from the Media Foundation team at Microsoft (perks of working for Microsoft, I guess!), we managed to put together a small component that allows us to read the required metadata from an InMemoryRandomAccessStream object. Here’s the relevant snippet that does the main job (with all the error handling stripped out to de-clutter the code):

MFAttributesHelper(InMemoryRandomAccessStream^ stream, String^ mimeType)
{
    MFStartup(MF_VERSION);

    // create an IMFByteStream from "stream"
    ComPtr<IMFByteStream> byteStream;
    MFCreateMFByteStreamOnStreamEx(
           reinterpret_cast<IUnknown*>(stream),
           &byteStream);

    // assign mime type to the attributes on this byte stream
    ComPtr<IMFAttributes> attributes;
    byteStream.As(&attributes);
    attributes->SetString(
           MF_BYTESTREAM_CONTENT_TYPE,
           mimeType->Data());

    // create a source reader from the byte stream
    ComPtr<IMFSourceReader> sourceReader;
    MFCreateSourceReaderFromByteStream(
           byteStream.Get(),
           nullptr,
           &sourceReader);

    // get current media type
    ComPtr<IMFMediaType> mediaType;
    sourceReader->GetCurrentMediaType(
           MF_SOURCE_READER_FIRST_AUDIO_STREAM,
           &mediaType);

    // get all the data we're looking for
    PROPVARIANT prop;
    sourceReader->GetPresentationAttribute(
           MF_SOURCE_READER_MEDIASOURCE,
           MF_PD_DURATION,
           &prop);
    Duration = prop.uhVal.QuadPart;

    UINT32 data;
    sourceReader->GetPresentationAttribute(
           MF_SOURCE_READER_MEDIASOURCE,
           MF_PD_AUDIO_ENCODING_BITRATE,
           &prop);
    BitRate = prop.ulVal;

    mediaType->GetUINT32(
           MF_MT_AUDIO_SAMPLES_PER_SECOND,
           &data);
    SampleRate = data;

    mediaType->GetUINT32(
           MF_MT_AUDIO_NUM_CHANNELS,
           &data);
    ChannelCount = data;
}

This is the implementation of the constructor on the MFAttributesHelper ref class. As you can tell, the constructor accepts a reference to an instance of an InMemoryRandomAccessStream object and the MIME type of the content in question, and proceeds to extract the duration, encoding bitrate, sample rate and channel count from it. It does this by first creating an IMFByteStream via the convenient MFCreateMFByteStreamOnStreamEx function, which wraps an IRandomAccessStream object (an interface InMemoryRandomAccessStream implements). The object returned by MFCreateMFByteStreamOnStreamEx also implements IMFAttributes, which we QueryInterface for (via ComPtr::As) and use to assign the MIME type value. Next we instantiate an object that implements IMFSourceReader via MFCreateSourceReaderFromByteStream and use that instance to fetch the duration and encoding bitrate values via the GetPresentationAttribute method. And finally, we retrieve an object that implements the IMFMediaType interface via IMFSourceReader::GetCurrentMediaType and use that object to fetch the sample rate and the channel count values. Once you know how to do all this it seems quite trivial of course, but getting here, believe me, took some doing!

Now that we have this component, reading the metadata from JavaScript proves to be fairly straightforward. Here’s an example. In the code below, memoryStream is an InMemoryRandomAccessStream instance and mimeType is a string with the MIME type of the content:

var helper = MFUtils.MFAttributesHelper.create(memoryStream, mimeType);

// now, helper's sampleRate, bitRate, duration and channelCount
// properties contain the data we are looking for

Now, with the metadata handy, we simply follow the steps outlined earlier in this post to commence playback. As mentioned before, the sample is hosted on GitHub here:

https://github.com/avranju/AudioPlayerWithCustomStream

For the sake of the sample, I took a plain MP3 file and applied an XOR cipher to it, then loaded it up and played it back from memory, applying another XOR transform to the bits before playback. It all works rather well together and, again, hat-tip to Chris Guzak for all his help in whittling the WinRT component down to its essence and really cleaning up its interface!
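
In case you’re curious, the XOR transform amounts to something like this (a minimal sketch of the idea, not the sample’s actual code) – the same operation both scrambles and unscrambles the bytes:

function xorTransform(bytes, key) {
    // bytes is a Uint8Array view over the audio data, key is a byte value
    for (var i = 0; i < bytes.length; i++) {
        bytes[i] = bytes[i] ^ key;
    }
    return bytes;
}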

 
Add a "Web Server Here" Explorer shell extension command
Technobabble
9/29/2013 6:25:16 AM  

Sometimes I just want to spin up a web server on a folder in Explorer.  Often it’s because browsers get nervous about running HTML pages directly off the file system and seem to feel more comfortable when they’re served from a web server.  I figured I’d had enough of writing little scripts or using IIS to create virtual folders every time and wanted a context menu option in Windows Explorer that’ll just launch a web server pretty much anywhere I want.  Here’s what it’ll look like:

[screenshot: the “Web Server Here” command in the Explorer context menu]

Turns out, this is fairly straightforward to accomplish using IIS Express and some registry tweaks.  For those of you who don’t know, IIS Express is a lightweight, self-contained version of IIS meant to be used for developing, debugging and testing web apps.  When you use Visual Studio, the web apps themselves run inside IIS Express when you hit F5.  Being self-contained, it can run a web server pretty much anywhere from a command prompt.  Documentation on how to do this is available here.  You can download and install IIS Express either via the Web Platform Installer or from here (IIS Express version 8.0 at the time of writing).  If your web app files are located in, say, D:\Code\Web\Foo then you’d run a web server from that location like so:

"C:\Program Files (x86)\IIS Express\iisexpress.exe" /path:"D:\Code\Web\Foo" /port:8080 /systray:true

The path to iisexpress.exe might be different if you’re running on a 32-bit system.  It’ll just be “Program Files” instead of “Program Files (x86)”.  Once you’ve run the command, the web server starts up and you can load your web app in your favorite browser by navigating to http://localhost:8080/.  The next step is to integrate this into the Explorer shell so you can run this from wherever you want directly from Explorer.  Phil Haack has written up a post on how to do this with the web server that Visual Studio 2008 used to ship with way back in, well, 2008.  I adapted the basic steps described there to make it work with IIS Express.  Now, setting this up involves editing the Windows Registry, so please be careful with what you do.  This works on my machine and that’s about all I am willing to say!

[works-on-my-machine badge]

If you’re on a 64-bit installation of Windows, here are the changes you need to make to your registry:

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\IISExpressWebServer]
@="Web Sever Here"

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\IISExpressWebServer\command]
@="C:\\Program Files (x86)\\IIS Express\\iisexpress.exe /path:\"%1\" /port:8080 /systray:true"

And if you’re on a 32-bit installation, then this is what you need to do:

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\IISExpressWebServer]
@="Web Sever Here"

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\IISExpressWebServer\command]
@="C:\\Program Files\\IIS Express\\iisexpress.exe /path:\"%1\" /port:8080 /systray:true"

If you need .reg files so you can just double-click to import them into your registry then they are available here.  You might want to edit the .reg files in case your installation paths are different from what’s given there.  That’s pretty much it!

 
Some notes on C++11 lambda functions
Technobabble
7/29/2013 6:25:11 PM  

Lambda functions are a new capability introduced in C++11 that offers a terse, compact syntax for defining functions at the point of their use.  Bjarne Stroustrup says that C++11, which is the latest ratified revision of the C++ standard, “feels like a new language”.  I think lambda functions are a big part of what makes the language feel so very different from C++03.  Lambda functions basically allow you to do things like this:

vector<int> nums { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
auto evens = count_if(begin(nums), end(nums), [](int num) {
    return (num % 2) == 0;
});

The third parameter passed to the standard count_if function is a predicate that is expected to return true if the value passed to it satisfies the condition and false otherwise.  In the snippet above we simply count the number of even numbers in the collection.  Search for “C++ lambdas” on your favorite search engine and you should get plenty of material out there talking about this feature.  What follows in this post are some notes on certain aspects of C++ lambdas that I happened to notice as I was learning about them, listed in no particular order.

  1. You can pass a lambda object around as you would pretty much anything else. Here’s a made up example showing how you can pass a lambda as an argument to another function.

    #include <iostream>
    
    using namespace std;
    
    template <typename T>
    void call(T);
    
    int main() {
      auto fn = []() { cout<<"Lambda"<<endl; };
      call(fn);
      return 0;
    }
    
    template <typename T>
    void call(T fn) {
      fn();
    }
  2. You can return lambdas from functions like any other object, which makes for some interesting possibilities such as the following:

    #include <iostream>
    #include <functional>
    
    using namespace std;
    
    template<typename T>
    function<T()> makeAccumulator(T& val, T by) {
        return [=,&val]() {
            return (val += by);
        };
    }
    
    int main() {
        int val = 10;
        auto add5 = makeAccumulator(val, 5);
        cout<<add5()<<endl;
        cout<<add5()<<endl;
        cout<<add5()<<endl;
        cout<<endl;
    
        val = 100;
        auto add10 = makeAccumulator(val, 10);
        cout<<add10()<<endl;
        cout<<add10()<<endl;
        cout<<add10()<<endl;
    
        return 0;
    }

    Which produces the following output:

    15
    20
    25
    
    110
    120
    130

    The key thing to remember here is that it is your responsibility to make sure that the values you capture in a lambda remain in memory for the lifetime of the lambda itself. The compiler will not for instance, prevent you from capturing local variables by reference in a lambda and if you continue to access a variable that is no longer available, well, then the behavior is undefined.
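
    Here’s a quick illustration of what NOT to do (my own example, not from the post; makeBrokenAccumulator is a hypothetical function): the local variable dies when the function returns, so the captured reference dangles and invoking the returned lambda is undefined behavior.

    #include <functional>
    
    using namespace std;
    
    function<int()> makeBrokenAccumulator(int by) {
        int local = 10;
        return [&local, by]() {
            return (local += by);   // dangling reference to "local"!
        };
    }
    
    int main() {
        auto addBroken = makeBrokenAccumulator(5);
        // addBroken();  // undefined behavior - don't call it
        return 0;
    }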

  3. Simply defining a lambda causes all variables captured by value by the lambda to be copy constructed. You don't really have to have any code that invokes the lambda in order for the variables to be copied. This is consistent with the idea that creating a lambda function essentially creates a function object which has as instance members the variables that have been captured in the lambda. Here's an example:

    #include <iostream>
    
    using namespace std;
    
    class Foo {
    public:
      Foo() {
        cout<<"Foo::Foo()"<<endl;
      }
    
      Foo(const Foo& f) {
        cout<<"Foo::Foo(const Foo&)"<<endl;
      }
    
      ~Foo() {
        cout<<"Foo~Foo()"<<endl;
      }
    };
    
    int main() {
      Foo f;
      auto fn = [f]() { cout<<"lambda"<<endl; };
      cout<<"Quitting."<<endl;
      return 0;
    }

    Here's the output this produces:

    Foo::Foo()
    Foo::Foo(const Foo&)
    Quitting.
    Foo::~Foo()
    Foo::~Foo()

    As you can tell, the copy constructor gets invoked even though the lambda itself never gets invoked.

  4. As an extension of the previous point, if you capture objects by value in a lambda and then proceed to pass that lambda around to other functions by value, then the variables in the closure will also get copy constructed.
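
    To see this concretely, here’s a minimal sketch (mine, not from the post) building on the Foo class above; call_by_value is a hypothetical helper that takes the closure by value, so Foo’s copy constructor runs once more at the point of the call:

    #include <iostream>
    
    using namespace std;
    
    class Foo {
    public:
      Foo() { cout<<"Foo::Foo()"<<endl; }
      Foo(const Foo& f) { cout<<"Foo::Foo(const Foo&)"<<endl; }
      ~Foo() { cout<<"Foo::~Foo()"<<endl; }
    };
    
    template<typename T>
    void call_by_value(T fn) {  // fn is a by-value parameter: the closure is copied
      fn();
    }
    
    int main() {
      Foo f;
      auto fn = [f]() { cout<<"lambda"<<endl; };  // copy #1: f goes into the closure
      call_by_value(fn);                          // copy #2: the closure and its Foo
      return 0;
    }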

  5. If you reference capture a const local variable, it becomes a const reference in the lambda.

    #include <iostream>
    
    using namespace std;
    
    int main() {
        const int val = 10;
        auto f1 = [&val]() {
            val = 20;  // won't compile
            cout<<"val = "<<val<<endl;
        };
        f1();
    
        return 0;
    }
  6. In general, when you wish to declare a variable that can hold a lambda in contexts where auto is not permissible (e.g. function return types or arguments), use std::function. An example:
    #include <iostream>
    #include <functional>
    
    using namespace std;
    
    function<bool(int)> makeLEPredicate(int max) {
        return [max](int val) -> bool {
            return val <= max;
        };
    }
    
    int main() {
        auto le10 = makeLEPredicate(10);
        cout<<le10(8)<<endl;
    
        return 0;
    }

    You could alternatively use function templates to achieve the same thing if the semantics of using templates make sense for your use case.

That’s all for now. This list might get expanded as I explore lambdas further. As you might have noticed, the ability to use lambdas really does make a significant difference to productivity without sacrificing performance.

 
Implementing variable sized tiles using WinJS ListView
Technobabble
6/16/2013 10:04:25 AM  

Windows Store apps on Windows 8 often use a grouped tile style for rendering user interfaces. The modern desktop on Windows 8 is a classic example. Here’s a zoomed out view of my current desktop for instance:

[screenshot: a zoomed-out view of the modern desktop showing grouped tiles]

You’ll note that the tiles have been grouped into separate sections and each section contains tiles of different sizes. In this case there are only 2 sizes – a wide tile:

[image: a wide tile]

And a square tile:

[image: a square tile]

Here’s an example of an app that uses different tile sizes in different groups:

[screenshot: an app that uses different tile sizes in different groups]

I’d been meaning to write down exactly how we can customize the WinJS ListView to create interfaces such as this one and, well, here it is. The basic technique for implementing variable tiles with the WinJS ListView involves the following things:

  1. Determine what your “cell unit” is going to be.  This is the width and height of a single “unit” in pixels – the idea is that tile sizes must be a multiple of this.  For example, I might decide that my cell unit is going to be 15x20 pixels.  Then valid tile sizes would be 15x40, 30x20, 45x300 etc.  Once you know what this is, implement the groupInfo property on the GridLayout object on your list view’s layout like so.

    ready: function (element, options) {
        // more stuff here
        var layout = new WinJS.UI.GridLayout();
        layout.groupInfo = this.getGroupInfo.bind(this);
        // more stuff here
    },
    
    getGroupInfo: function () {
        return {
            enableCellSpanning: true,
            cellWidth: 15,
            cellHeight: 20
        }
    },
  2. Since different tiles in your control can be of different sizes you’ll need to tell the ListView what those sizes are going to be.  You do this by implementing a method called itemInfo on your GridLayout object.  The ListView calls itemInfo for every element it renders from the data source.  The important thing to remember is that the size you return from the itemInfo method must be a multiple of the size you returned from groupInfo.

    ready: function (element, options) {
        // more stuff here
        var layout = new WinJS.UI.GridLayout();
        layout.itemInfo = this.getItemInfo.bind(this);
        // more stuff here
    }
    
    getItemInfo: function (index) {
        var data = ImageData.imagesList.getAt(index);
        var size = {
            width: 150,
            height: 200,
            newColumn: false
        };
        if (data.group.name === ImageData.imageGroups.kittens.name) {
            size.height = 100;
        }
        else if (data.group.name === ImageData.imageGroups.portraits.name) {
            size.width = 120;
        }
    
        return size;
    },
  3. Associate a JS function for your list view’s itemTemplate property.  The job of this function is to render an item.

    listView.itemTemplate = this.selectItemTemplate.bind(this);

    It is passed a WinJS.Promise object as a parameter which when resolved will yield the data item which is to be rendered.  We can either manually create DOM elements using document.createElement from this routine or, as is more convenient, use declaratively pre-created WinJS.Binding.Template instances from the HTML mark-up.  Here’s an example implementation showing how to do this:

    selectItemTemplate: function(itemPromise, recycle) {
        return itemPromise.then(function (item) {
            var data = item.data;
            var template;
            if (data.group.name === ImageData.imageGroups.kittens.name) {
                template = document.querySelector("#wide-template").winControl;
            }
            else if (data.group.name === ImageData.imageGroups.portraits.name) {
                template = document.querySelector("#long-template").winControl;
            }
            else {
                template = document.querySelector("#default-template").winControl;
            }
    
            return template.render(item.data);
        });
    },

As you can tell, we first wait for the promise to resolve, then take the data, run some custom template selection logic to pick a template from the DOM, and finally call its render method, passing in the data object as the binding context. You will need to ensure that the styling you use on your template mark-up matches up with the size you return from itemInfo, as otherwise you might end up with blank spaces in your tiles where the styling doesn’t get applied (you might want to do this deliberately of course, in which case it’s totally fine). That’s pretty much it!

 
Debugging existing Windows Store apps
Technobabble
5/23/2013 12:59:02 PM  

Did you know that you can debug pretty much any installed store app on your machine?  Let’s say you want to know exactly why it is that the Windows Mail app acts funny sometimes.  Here’s what you’d do:

  1. Go to the modern desktop and type “Debuggable Package Manager” and launch it.

    [screenshot: launching “Debuggable Package Manager” from the Start screen]

    This opens up a powershell window.

  2. Run Get-AppxPackage to list the packages installed and use Where-Object to filter for what you’re looking for. Since we’re interested in the Mail app we run this:

    Get-AppxPackage | Where-Object PackageFullName -like "*commu*"
  3. Note the value of the “PackageFullName” property and enable debugging by running this:

    Enable-AppxDebug microsoft.windowscommunicationsapps_17.0.1114.318_x64__8wekyb3d8bbwe
  4. Now launch the app.  Then launch Visual Studio, hit Ctrl+Alt+P and select the instance of WWAHost.exe which looks like the app you’re interested in.

    [screenshot: the Attach to Process dialog in Visual Studio showing WWAHost.exe instances]

  5. Debug away!

    [screenshot: stepping through the Mail app in the Visual Studio debugger]

 
Screen scraping with your browser’s JavaScript console
Technobabble
5/5/2013 11:04:43 AM  

I needed to experiment a bit with language packs for IE 10 the other day and that involved downloading and installing all the available language packs. Unfortunately I couldn’t find a single convenient file for download that’d install everything. The language packs were available as separate downloads for each supported language. Like this:

[screenshot: the IE 10 language pack download page with a long list of download buttons]

This was a problem as I was in no mood to download each file individually and there were hundreds of “download” buttons there. I figured I’d see if I could screen scrape the links from the DOM of this page and then write a little script to download all of them in one go. So I fired up an instance of IE and hit F12 to launch the developer tools and used the “Select element by click” button to quickly navigate to the markup associated with a “download” button.

[screenshot: the developer tools “Select element by click” view highlighting a download link]

As you can tell, all the download buttons are basically anchor tags and the href attribute points to the MSU file for that particular language. Also, you’ll note that each such anchor tag has a class called “download” applied on it. So I should be able to fetch all the links by simply iterating through all anchor tags which have the “download” class applied on them. I switched to the “Console” tab in the developer tools window and ran the following script:

document.querySelectorAll("a.download")

And sure enough this produced a list of all the anchor tags I was interested in. I needed the URL however and not the DOM elements themselves. So I ran this next:

Array.prototype.forEach.call(
    document.querySelectorAll("a.download"),
    function (a) {
        console.log(a.href);
    });

This produced a list of links such as this (snipped since there are quite a lot of them):

http://download.microsoft.com/download/D/9/A/.../IE10-Windows6.1-LanguagePack-x64-zh-tw.msu 
http://download.microsoft.com/download/D/9/A/.../IE10-Windows6.1-LanguagePack-x64-zu-za.msu 
http://download.microsoft.com/download/D/9/A/.../IE10-Windows6.1-LanguagePack-x86-af-za.msu 
http://download.microsoft.com/download/D/9/A/.../IE10-Windows6.1-LanguagePack-x86-am-et.msu

If you’re wondering why I had to iterate through each element in the list of nodes returned by querySelectorAll via Array.prototype.forEach.call, that’s because what querySelectorAll returns isn’t a JavaScript array object, i.e., it doesn’t inherit from Array.prototype. It is instead a NodeList object, which looks a lot like an array! It has numeric properties from 0 to N-1, where N is the number of elements returned, and it has a length property as well, which is equal to N. It turns out that all the Array methods are perfectly capable of dealing with such “array like” objects just as well as genuine, certified JavaScript arrays. Here’s an example of what I am talking about:

var notArray = {
    0: "This ",
    1: "is ",
    2: "not ",
    3: "really ",
    4: "an ",
    5: "array.",
    length: 6
};

console.log(Array.prototype.reduce.call(
    notArray,
    function (previous, current) {
        return previous + current;
    },
    ""));

This snippet prints the following text to the console:

This is not really an array.

If you take another look at the list of URLs our script printed to the console, you’ll notice from the file names that this list includes both x86 files and x64 files. I wanted only x64 files. So, I next changed the script to this:

Array.prototype.forEach.call(
    document.querySelectorAll("a.download[href*=x64]"),
    function (a) {
        console.log(a.href);
    });

The selector syntax above looks for all anchor tags in the DOM which have the “download” class applied and whose href attribute value contains the string “x64”. I had first implemented this via another call to Array.prototype.filter before learning that CSS3 selector syntax already provides for it! Pretty nifty, no? That’s pretty much it. I wanted to run a download script to fetch all the files, so I slightly modified the script to produce wget calls like so:

Array.prototype.forEach.call(
    document.querySelectorAll("a.download[href*=x64]"),
    function (a) {
        console.log("wget " + a.href);
    });

And plonked the output into a batch file and ran it. Mission accomplished!

Now, it turned out that this particular page includes the jQuery library as well, as can be seen when you pull up the files list from the “Script” tab in the developer console.

[screenshot: the “Script” tab file list showing jQuery loaded on the page]

I could have done the same thing as above with a slightly terser syntax using jQuery. Here’s how:

$("a.download[href*=x64]").each(function () {
    console.log("wget " + this.href);
});

Not having to resort to the Array.prototype weirdness does make the code a lot cleaner, doesn’t it?

 
Building an Instagram clone – Part 2
Technobabble
4/19/2013 9:50:27 AM  

In part 1 we took a look at some of the UI layout implementation details of the InstaFuzz app.  You can get the source code for the app from here if you wish to run it locally.  In this installment we’ll take a look at some of the other bits such as how drag/drop, File API, Canvas and Web Workers are used.

Drag/Drop

One of the things that InstaFuzz supports is the ability to drag and drop image files directly on to the big blackish/blue box. Support for this is enabled by handling the “drop” event on the CANVAS element. When a file is dropped onto an HTML element the browser fires the “drop” event on that element and passes in a dataTransfer object which contains a files property that contains a reference to the list of files that were dropped. Here’s how this is handled in the app (“picture” is the ID of the CANVAS element on the page):

var pic = $("#picture");
pic.bind("drop", function (e) {
    suppressEvent(e);
    var files = e.originalEvent.dataTransfer.files;
    // more code here to open the file
});
pic.bind("dragover", suppressEvent).bind("dragenter", suppressEvent);
function suppressEvent(e) {
    e.stopPropagation();
    e.preventDefault();
}

The files property is a collection of File objects that can subsequently be used with the File API to access the file contents (covered in the next section). We also handle the dragover and dragenter events, suppressing them so that they don’t propagate to the browser and the browser doesn’t handle the file drop itself – IE, for instance, might otherwise unload the current page and attempt to open the file directly.

File API

Once the file has been dropped, the app attempts to open the image and render it in the canvas. It does this by using the File API. The File API is a W3C specification that allows web apps to programmatically access files from the local file system in a secure fashion. In InstaFuzz we use the FileReader object to read the file contents as a data URL string like so using the readAsDataURL method:

var reader = new FileReader();
reader.onloadend = function (e2) {
    drawImageToCanvas(e2.target.result);
};
reader.readAsDataURL(files[0]);

Here, files is the collection of File objects retrieved from the function handling the “drop” event on the CANVAS element. Since we are interested only in a single file we simply pick the first file from the collection and ignore the rest if there are any. The actual file contents are loaded asynchronously and once the load completes, the onloadend event is fired, where we get the file contents as a data URL which we then draw on to the canvas.

Rendering the filters

Now the core functionality here is of course the application of the filters. In order to be able to apply the filter to the image we need a way to access the individual pixels from the image. And before we can access the pixels we need to have actually rendered the image on to our canvas. So let’s first take a look at the code that renders the image that the user picked on to the canvas element.

Rendering images on to the canvas
The canvas element supports the rendering of Image objects via the drawImage method. To load up the image file in an Image instance, InstaFuzz uses the following utility routine:

App.Namespace.define("InstaFuzz.Utils", {
    loadImage: function (url, complete) {
        var img = new Image();
        img.src = url;
        img.onload = function () {
            complete(img);
        };
    }
});

This allows the app to load up image objects from a URL using code such as the following:

function drawImageToCanvas(url) {
    InstaFuzz.Utils.loadImage(url, function (img) {
        // save reference to source image
        sourceImage = img;

        mainRenderer.clearCanvas();
        mainRenderer.renderImage(img);

        // load image filter previews
        loadPreviews(img);
    });
}

Here, mainRenderer is an instance created from the FilterRenderer constructor function defined in filter-renderer.js. The app uses FilterRenderer objects to manage canvas elements – both in the preview pane as well as the main canvas element on the right. The renderImage method on the FilterRenderer has been defined like so:

FilterRenderer.prototype.renderImage = function (img) {
    var imageWidth = img.width;
    var imageHeight = img.height;
    var canvasWidth = this.size.width;
    var canvasHeight = this.size.height;
    var width, height;

    if ((imageWidth / imageHeight) >= (canvasWidth / canvasHeight)) {
        width = canvasWidth;
        height = (imageHeight * canvasWidth / imageWidth);
    } else {
        width = (imageWidth * canvasHeight / imageHeight);
        height = canvasHeight;
    }

    var x = (canvasWidth - width) / 2;
    var y = (canvasHeight - height) / 2;
    this.context.drawImage(img, x, y, width, height);
};

That might seem like a lot of code but all it does ultimately is figure out the best way to render the image in the available screen area considering the aspect ratio of the image. The key piece of code that actually renders the image on the canvas occurs on the last line of the method. The context member refers to the 2D context acquired from the canvas object by calling its getContext method.
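
For reference, that 2D context is obtained from a CANVAS element like so (a generic illustration, not the actual FilterRenderer source):

var canvasElement = document.getElementById("picture");
var context = canvasElement.getContext("2d");
// drawImage, getImageData and putImageData are all invoked on this object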

Fetching pixels from the canvas
Now that the image has been rendered we will need access to the individual pixels in order to apply all the different filters that are available. This is easily acquired by calling getImageData on the canvas’s context object. Here’s how InstaFuzz calls this from instafuzz.js.

var imageData = renderer.context.getImageData(
    0, 0,
    renderer.size.width,
    renderer.size.height);

The object returned by getImageData provides access to the individual pixels via its data property, which in turn is an array-like object containing a collection of byte values, where each value represents the color rendered for a single channel of a single pixel. Each pixel is represented using 4 bytes that specify values for the red, green, blue and alpha channels, and the color intensity of each channel ranges from 0 through 255. The data array also has a length property that returns the length of the buffer. If you have a 2D co-ordinate you can easily transform it into an index into this array. Here’s the utility function from filters.js that accepts as input an image data object along with the 2D coordinates of the pixel the caller is interested in, and returns an object containing the color values:

function getPixel(imageData, x, y) {
    var data = imageData.data, index = 0;

    // normalize x and y and compute index
    x = (x < 0) ? (imageData.width + x) : x;
    y = (y < 0) ? (imageData.height + y) : y;
    index = (x + y * imageData.width) * 4;

    return {
        r: data[index],
        g: data[index + 1],
        b: data[index + 2]
    };
}

Applying the filters
Now that we have access to the individual pixels, applying the filter is fairly straightforward. Here, for instance, is the function that applies a weighted grayscale filter to the image. It simply picks intensities from the red, green and blue channels, sums them up after applying a multiplication factor to each channel, and then assigns the result to all three channels.

// "Weighted Grayscale" filter
Filters.addFilter({
    name: "Weighted Grayscale",
    apply: function (imageData) {
        var w = imageData.width, h = imageData.height;
        var data = imageData.data;
        var index;
        for (var y = 0; y < h; ++y) {
            for (var x = 0; x < w; ++x) {
                index = (x + y * imageData.width) * 4;
                var luminance = parseInt((data[index + 0] * 0.3) +
                                         (data[index + 1] * 0.59) +
                                         (data[index + 2] * 0.11));
                data[index + 0] = data[index + 1] =
                    data[index + 2] = luminance;
            }

            Filters.notifyProgress(imageData, x, y, this);
        }

        Filters.notifyProgress(imageData, w, h, this);
    }
});

Once the filter has been applied, we can have that reflected on the canvas by calling the putImageData method, passing in the modified image data object. While the weighted grayscale filter is fairly simple, most of the other filters use an image processing technique known as convolution. The code for all the filters is available in filters.js and the convolution filters were ported from the C code available here.

Web Workers

As you might imagine doing all this number crunching to apply the filters can potentially take a long time to complete. The motion blur filter for instance uses a 9x9 filter matrix for computing the new value for every single pixel and is in fact the most CPU intensive filter among them all. If we were to do all this computation on the UI thread of the browser then the app would essentially freeze every time a filter was being applied. To provide a responsive user experience the app delegates the core image processing tasks to a background script using the support for W3C Web Workers in modern browsers.

Web workers allow web applications to have scripts run in a background task that executes in parallel along with the UI thread. Communication between the worker and the UI thread is accomplished by passing messages using the postMessage API. On both ends (i.e. the UI thread and the worker) this manifests as an event notification that you can handle. You can only pass “data” between workers and the UI thread, i.e., you cannot pass anything that has to do with the user interface – you cannot for instance, pass DOM elements to the worker from the UI thread.

In InstaFuzz the worker is implemented in the file filter-worker.js. All the worker does is handle the onmessage event, apply a filter and then pass the results back via postMessage. As it turns out, even though we cannot pass DOM elements (which means we cannot just hand a CANVAS element to the worker to have the filter applied), we can in fact pass the image data object as returned by the getImageData method that we discussed earlier. Here’s the filter processing code from filter-worker.js:

importScripts("ns.js", "filters.js");

var tag = null;
onmessage = function (e) {
    var opt = e.data;
    var imageData = opt.imageData;
    var filter;
    
    tag = opt.tag;
    filter = InstaFuzz.Filters.getFilter(opt.filterKey);

    var start = Date.now();
    filter.apply(imageData);
    var end = Date.now();

    postMessage({
        type: "image",
        imageData: imageData,
        filterId: filter.id,
        tag: tag,
        timeTaken: end - start
    });
}

The first line pulls in some script files that the worker depends on by calling importScripts. This is similar to including a JavaScript file in an HTML document using the SCRIPT tag. Then we set up a handler for the onmessage event, in response to which we simply apply the filter in question and pass the result back to the UI thread by calling postMessage. Simple enough!

The code that initializes the worker is in instafuzz.js and looks like this:

var worker = new Worker("js/filter-worker.js");

Not much, is it? When a message is sent by the worker to the UI thread, we handle it by specifying a handler for the onmessage event on the worker object. Here’s how this is done in InstaFuzz:

worker.onmessage = function (e) {
    var isPreview = e.data.tag;
    switch (e.data.type) {
        case "image":
            if (isPreview) {
                previewRenderers[e.data.filterId].
                    context.putImageData(
                        e.data.imageData, 0, 0);
            } else {
                mainRenderer.context.putImageData(
                    e.data.imageData, 0, 0);
            }

            break;
        // more code here
    }
};

The code should be fairly self-explanatory. It simply picks up the image data object sent by the worker and applies it to the relevant canvas’s context object, causing the modified image to be rendered on screen. Scheduling a filter run on the worker is equally simple. Here’s the routine that does this in InstaFuzz:

function scheduleFilter(filterId,
                        renderer,
                        img, isPreview,
                        resetRender) {
    if (resetRender) {
        renderer.clearCanvas();
        renderer.renderImage(img);
    }

    var imageData = renderer.context.getImageData(
        0, 0,
        renderer.size.width,
        renderer.size.height);

    worker.postMessage({
        imageData: imageData,
        width: imageData.width,
        height: imageData.height,
        filterKey: filterId,
        tag: isPreview
    });
}

In conclusion

We saw that fairly intricate user experiences are possible today with HTML5 technologies such as Canvas, Drag/Drop, File API and Web Workers. Support for all of these technologies is quite good in pretty much all modern browsers. One thing that we did not address here is the question of making the app compatible with older browsers. That, truth be told, is a non-trivial but necessary task that I will hopefully be able to talk about in a future article.
 
Building an Instagram clone – Part 1
Technobabble
4/17/2013 5:31:26 PM  

Introduction

When I started out on this app I was really just interested in seeing if the web platform had evolved to a point where an app like the hugely popular Instagram could be built using just HTML, JavaScript and CSS. As it turns out, we can in fact do exactly that. This article walks you through the technologies that make this possible and shows how it is entirely feasible today to build interoperable web applications that provide a great user experience no matter what brand of browser the user is running.

If you happen to be one of the two or so people who have not heard about Instagram then you might be pleased to hear that it is a hugely popular photo sharing and social networking service that allows you to take pictures, apply interesting digital filters on them and share them with the world to see. The service got so popular that it was acquired by Facebook for a bag full of cash and stock in April of 2012.

InstaFuzz is the name of the app I put together and, while I don’t expect to be acquired by Facebook or anybody else for a billion green, it does make the case that an app such as this one can be built using only standards compliant web technologies such as Canvas, File API, Drag/Drop, Web Workers, ES5 and CSS3 and still manage to run well on modern browsers such as Internet Explorer 10, Google Chrome and Firefox.

About the app

If you’d like to take a look at the app, then here’s where it is hosted at:

http://blogorama.nerdworks.in/arbit/InstaFuzz/

You can download the source and run it locally from here.  While this is a Visual Studio 2012 project, there really isn’t any server code or anything like that.  You can use your favorite editor to look at the source and run it from the file system if you are so inclined.

As soon as you load it up, you’re presented with a screen that looks like this:

[screenshot: the InstaFuzz start screen]

The idea is that you can load up a photograph into the app either by clicking on the big red “Add” button on the bottom left hand corner or drag and drop an image file into the blackish/blue area on the right. Once you do that you get something that looks like this:

[screenshot: InstaFuzz with an image loaded and filter previews on the left]

You’ll note that the available digital filters are listed on the left of the screen, each showing a preview of what the image would look like with that filter applied. Applying a filter is a simple matter of clicking on one of the filter previews on the left. Here’s what it looks like after applying the “Weighted Grayscale” filter followed by a “Motion Blur”. As you can tell, filters are additive – as you keep clicking on filters, they are applied on top of what was applied earlier:

[screenshot: the image after applying “Weighted Grayscale” followed by “Motion Blur”]

Let’s next take a look at how the UI layout has been put together.

UI Layout

The HTML markup is so little that I can reproduce the contents of the BODY tag in its entirety here (excluding the SCRIPT includes):

<header>
    <div id="title">InstaFuzz</div>
</header>
<section id="container">
    <canvas id="picture" width="650" height="565"></canvas>
    <div id="controls">
        <div id="filters-list"></div>
        <button id="loadImage">Add</button>
        <input type="file" id="fileUpload"
               style="display: none;"
               accept="image/gif, image/jpeg, image/png" />
    </div>
</section>

<!-- Handlebar template for a filter UI button -->
<script id="filter-template" type="text/x-handlebars-template">
    <div class="filter-container" data-filter-id="{{filterId}}">
        <div class="filter-name">{{filterName}}</div>
        <canvas class="filter-preview" width="128" height="128"></canvas>
    </div>
</script>

There’s nothing much going on here. Pretty much everything should be standard fare. I will, however, draw attention to the fact that I am using the Handlebars JavaScript templating system here for rendering the markup for the list of filters on the left of the screen. The template markup is declared in the HTML file (the SCRIPT tag in the snippet shown above) and then used from JavaScript. The template markup is then bound to a JavaScript object that supplies the values for Handlebars expressions such as {{filterId}} and {{filterName}}. Here’s the relevant piece of JS from the app, with a bit of DOM manipulation help from jQuery:

var templHtml = $("#filter-template").html(),
    template = Handlebars.compile(templHtml),
    filtersList = $("#filters-list");
var context = {
    filterName: filter.name,
    filterId: index
};

filtersList.append(template(context));

As you can tell from the HTML markup all the filter preview boxes feature a CANVAS tag as does the big box on the right where the final output is rendered. We’ll go into a bit more detail later on in the article as to how canvas technology is used to achieve these effects.

The app also uses CSS3 @font-face fonts to render the text in the header and the “Add” button. The fonts have been taken from the excellent Font Squirrel site and here’s what the declaration looks like:

@font-face {
    font-family: 'TizaRegular';
    src: url('fonts/tiza/tiza-webfont.eot');
    src: url('fonts/tiza/tiza-webfont.eot?#iefix')
           format('embedded-opentype'),
         url('fonts/tiza/tiza-webfont.woff') format('woff'),
         url('fonts/tiza/tiza-webfont.ttf') format('truetype'),
         url('fonts/tiza/tiza-webfont.svg#TizaRegular') format('svg');
    font-weight: normal;
    font-style: normal;
}

This directive causes the user agent to embed the font in the page and make it available under the name assigned to the font-family rule, which in this case is “TizaRegular”. After this, we can assign this font via any CSS font-family rule just as we normally would. In InstaFuzz I use the following rule to assign the font to the header element:

font-family: TizaRegular, Cambria, Cochin, Georgia, Times,
   "Times New Roman", serif;

You might also have noticed that there is a subtle shadow being dropped on the page by the container element.

[screenshot: the drop shadow around the container element]

This is made possible using the CSS3 box-shadow rule and here’s how it’s used in InstaFuzz.

-moz-box-shadow: 1px 0px 4px #000000, -1px -1px 4px #000000;
-webkit-box-shadow: 1px 0px 4px #000000, -1px -1px 4px #000000;
box-shadow: 1px 0px 4px #000000, -1px -1px 4px #000000;

This causes the browser to render a shadow around the relevant element. Each comma separated section in the value specifies the following attributes of the shadow:

  1. Horizontal offset

  2. Vertical offset

  3. Blur radius – positive values have the effect of softening the shadow

  4. Shadow color

One can specify multiple shadow values separated by commas, as has in fact been done above. Note that I’ve also specified the shadow using vendor prefix syntax for Firefox and Chrome/Safari via the -moz and -webkit prefixes. This allows the shadow to continue to work in versions of those browsers where support for this capability was provided only through the vendor prefixed version of the rule. Note that the W3C version of the rule – box-shadow – is specified last. This is done deliberately to ensure that if the browser supports both forms then only the W3C behavior is actually applied to the page.

One often finds that web developers either fail to include the vendor prefixed version of a given CSS3 rule for all the browsers that support it, or fail to include the W3C version as well. Often developers just put in the webkit version of the rule, ignoring other browsers and the W3C standard version. This causes two problems – [1] a poor user experience for users on non-webkit browsers and [2] it ends up making webkit a de-facto standard for the web. Ideally we want the W3C to be driving the future of the web and not one specific browser implementation. So here are some things to remember when playing with experimental implementations of CSS features:

  1. Use vendor prefixed versions of CSS rules by all means but remember to specify the rule for all supported browsers and not just the one that you happen to be testing the page in (if you’re using Visual Studio to edit your CSS then you might be interested in the supremely excellent extension for Visual Studio called Web Essentials that makes the job of managing vendor prefixes about as simple as it can possibly get).

  2. Remember to specify the W3C version of the rule as well.

  3. Remember to order the occurrence of the rules so that the W3C version shows up last. This is to allow clients that support both the vendor prefixed version and the W3C version to use the W3C specified semantics for the rule.

That’s all for now.  In the next and final post in this series we’ll take a look at how the app supports drag/drop of files, the use of File API, how the filters themselves work and how we prevent the UI thread from freezing by delegating the core number crunching work to web workers.
