The Mandelbrot-Set with Google Go (Part II)

>> Part I: The Mandelbrot-Set with Google Go (Part I)

So here’s the second part of my little field trip to Google’s new programming language “Go“.
This time I tried to use “channels” and “goroutines” to parallelize the pixel calculations across all CPU cores.

The main change is that the parameters needed for the pixel calculation are now wrapped in a PointParams struct, so they can easily be put into a channel.
Next, we instantiate two buffered channels, sized to hold the PointParams for all pixels, so the goroutines don’t block and possibly cause a deadlock.
Then the PointIteration() function is invoked as a goroutine, once per core. To do that we just have to prefix the function call with the keyword “go”. Easy, isn’t it?
Instead of directly calculating the pixels in the nested for loop, we now just create new PointParams and shove them into the “in” channel.
Meanwhile the running PointIteration() goroutines fetch the PointParams from the “in” channel, and when a calculation is done they send a signal to the “out” channel.
All we have to do now is wait for all calculations to finish. This is done by looping over the “out” channel and pulling all values out. The value itself does not matter, so it is discarded.
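The wait-for-all step could also be expressed with a sync.WaitGroup instead of counting signals out of a second channel. Here is a minimal sketch of that idea — the function name parallelSum and the toy workload (squaring numbers instead of iterating pixels) are made up for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// parallelSum squares the numbers 0..n-1 across 'workers' goroutines.
// Instead of draining a counting channel, it closes the work channel
// and waits on a sync.WaitGroup.
func parallelSum(n, workers int) int {
	in := make(chan int, n)      // work items
	results := make(chan int, n) // per-item results
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for v := range in { // loop exits when 'in' is closed
				results <- v * v
			}
		}()
	}

	for i := 0; i < n; i++ {
		in <- i
	}
	close(in) // lets the worker loops terminate
	wg.Wait() // all workers done -> all results are in the channel
	close(results)

	sum := 0
	for r := range results {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(parallelSum(10, 4)) // 0+1+4+...+81 = 285
}
```

Closing the work channel also lets the workers exit cleanly instead of looping forever.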

As the documentation states, the current implementation of the compiler does not automatically parallelize the code. Thus, if we want CPU parallelization, we first have to add a call to runtime.GOMAXPROCS(NCPU) in our main() function.

package main

import (
  "fmt"
  "os"
  "math"
  "image"
  "image/png"
  "bufio"
  "time"
  "flag"
  "runtime"
)

const NCPU = 4		// number of CPU cores

var pointX = flag.Float64("x", -2.0, "X coordinate of starting point of Mandelbrot or fix point for Julia (range: -2.0 to 2.0)")
var pointY = flag.Float64("y", -2.0, "Y coordinate of starting point of Mandelbrot or fix point for Julia (range: -2.0 to 2.0)")
var zoom = flag.Float64("z", 1.0, "Zoom level")
var julia = flag.Bool("julia", false, "Turn on Julia calculation")
var maxIter = flag.Int("maxIter", 51, "Max number of point iterations")

type PointParams struct {
  cx float64
  cy float64
  px int
  py int
}

func main() {
  runtime.GOMAXPROCS(NCPU)
  flag.Parse()

  fmt.Printf("X: %f\n", *pointX)
  fmt.Printf("Y: %f\n", *pointY)
  fmt.Printf("Zoom: %f\n", *zoom)
  fmt.Printf("Julia: %t\n", *julia)
  fmt.Printf("MaxIter: %d\n", *maxIter)

  start := time.Nanoseconds()
  img := CalculateImage(1000, 1000)
  end := time.Nanoseconds()
  fmt.Printf("Time: %d ms\n", (end - start) / 1000 / 1000) // ms
  WriteImage(img)
}

func CalculateImage(imgWidth int, imgHeight int) *image.NRGBA {
  img := image.NewNRGBA(imgWidth, imgHeight)
  minCx := -2.0
  minCy := -2.0
  if !*julia {
    minCx = *pointX
    minCy = *pointY
  }
  maxSquAbs := 4.0	// maximum square of the absolute value
  // calculate step widths
  stepX := math.Fabs(minCx - 2.0) / float64(imgWidth) / *zoom
  stepY := math.Fabs(minCy - 2.0) / float64(imgHeight) / *zoom
  cx := 0.0
  cy := 0.0

  // Create buffered in and out channels
  in := make(chan *PointParams, imgWidth*imgHeight)
  out := make(chan int, imgWidth*imgHeight)

  // start goroutine for each CPU core
  for i := 0; i < NCPU; i++ {
    go PointIteration(in, out, maxSquAbs, *maxIter, img)
  }

  for px := 0; px < imgWidth; px++ {
    cx = minCx + float64(px) * stepX

    for py := 0; py < imgHeight; py++ {
      cy = minCy + float64(py) * stepY

      // put params in channel
      in <- &PointParams {cx, cy, px, py}
    }
  }

  // drain channel and wait for all calculations to complete
  for i := 0; i < (imgWidth*imgHeight); i++ {
    <- out
  }

  return img
}

func PointIteration(in chan *PointParams, out chan int, maxSquAbs float64, maxIter int, img *image.NRGBA) {
  for {
    // get params from in channel
    pointParams := <- in

    // init vars
    squAbs := 0.0
    iter := 0
    x := 0.0
    y := 0.0
    if *julia {
      x = pointParams.cx
      y = pointParams.cy
      pointParams.cx = *pointX
      pointParams.cy = *pointY
    }

    for squAbs <= maxSquAbs && iter < maxIter {
      xt := (x * x) - (y * y) + pointParams.cx	// z2
      yt := (2.0 * x * y) + pointParams.cy		// z2
      //xt := x * (x*x - 3*y*y) + cx	// z3
      //yt := y * (3*x*x - y*y) + cy	// z3
      //xt := x * (x*x*x*x - 10*x*x*y*y + 5*y*y*y*y) + cx	// z5
      //yt := y * (5*x*x*x*x - 10*x*x*y*y + y*y*y*y) + cy	// z5
      x = xt
      y = yt
      iter++
      squAbs = (x * x) + (y * y)
    }

    color := ChooseColor(iter, maxIter)
    img.Set(pointParams.px, pointParams.py, color)

    out <- 1	// signal that calculation is done
  }
}

func ChooseColor(iterValue int, maxIter int) *image.NRGBAColor {
  val := uint8(iterValue)
  if iterValue == maxIter {
    return &image.NRGBAColor {0, 0, 0, 255}
  }
  multi := uint8(255 / maxIter)
  return &image.NRGBAColor {0, val*multi, 0, 255}
  //return &image.NRGBAColor{^(val*5), ^(val*5), ^(val*5), 255}
}

func WriteImage(img *image.NRGBA) {
  file, err := os.Create("mandelbrot.png")
  if err != nil {
    fmt.Printf("Could not create file mandelbrot.png: %v\n", err)
    return
  }
  writer := bufio.NewWriter(file)
  png.Encode(writer, img)
  writer.Flush()
  file.Close()
}

So far so good. But when I ran this code I was a bit irritated, because it took almost twice as long as the single-threaded version.
After a while of wondering and pondering I decided to raise the max iterations, so that the point calculations consume much more time compared to the default settings. And behold, it works. The parallelized version is much faster than the normal one.
With maxIter set to 40000 the single-threaded version takes up to 46 seconds and the parallelized version about 11.5 seconds. That’s exactly the expected ratio of 4:1. You can also see very nicely how all cores are being used.

So it seems that the overhead the channels and goroutines add to provide thread-safety etc. is just too big when the individual calculations are this fast.
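One way to shrink that overhead would be to send coarser work items, e.g. one image row per channel message instead of one pixel. A minimal sketch of the idea — the dimensions are made up, and computeRow is a placeholder standing in for the escape-time loop:

```go
package main

import (
	"fmt"
	"sync"
)

const width, height = 8, 8 // tiny illustrative image

var img = make([][]int, height)

// computeRow fills one row of the image; it stands in for the per-pixel
// escape-time iteration. Each goroutine writes only its own rows, so no
// two goroutines touch the same slice element.
func computeRow(y int) {
	row := make([]int, width)
	for x := range row {
		row[x] = x * y // placeholder for the iteration value
	}
	img[y] = row
}

// render distributes whole rows over the workers: one channel operation
// per row instead of one per pixel, amortizing the channel overhead.
func render(workers int) {
	rows := make(chan int, height)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for y := range rows {
				computeRow(y)
			}
		}()
	}
	for y := 0; y < height; y++ {
		rows <- y
	}
	close(rows)
	wg.Wait()
}

func main() {
	render(4)
	fmt.Println(img[7][7]) // prints 49
}
```

With 1000×1000 pixels this cuts the number of channel operations from a million down to a thousand.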

The image produced with that many iterations is of course pretty useless. In fact it’s entirely black, due to my poor color-choosing implementation.
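Scaling the color in floating point would avoid the integer division 255 / maxIter, which truncates to 0 as soon as maxIter exceeds 255. A possible sketch — shade is a hypothetical replacement for the multi computation in ChooseColor:

```go
package main

import "fmt"

// shade maps an iteration count to a green-channel value in 0..255.
// The floating-point scaling keeps the gradient intact for arbitrarily
// large maxIter, unlike the integer division 255 / maxIter.
func shade(iter, maxIter int) uint8 {
	if iter >= maxIter {
		return 0 // points inside the set stay black
	}
	return uint8(255.0 * float64(iter) / float64(maxIter))
}

func main() {
	fmt.Println(shade(20000, 40000)) // mid-range green instead of black
}
```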

Next up: Rust (well…maybe… :))


The Mandelbrot-Set with Google Go (Part I)

>> Part II: The Mandelbrot-Set with Google Go (Part II)

Time to learn a new programming language. This time my choice fell on a quite young member of the programming-language family, namely Google Go.

The various promised features made me quite excited about this language, so I decided to write a simple implementation of the Mandelbrot-Set (with the option to also calculate Julia-Sets).

To keep it short – here’s the code:

package main

import (
  "fmt"
  "os"
  "math"
  "image"
  "image/png"
  "bufio"
  "time"
  "flag"
)

var pointX = flag.Float64("x", -2.0, "X coordinate of starting point of Mandelbrot or fix point for Julia (range: -2.0 to 2.0)")
var pointY = flag.Float64("y", -2.0, "Y coordinate of starting point of Mandelbrot or fix point for Julia (range: -2.0 to 2.0)")
var zoom = flag.Float64("z", 1.0, "Zoom level (only working properly for Mandelbrot)")
var julia = flag.Bool("julia", false, "Turn on Julia calculation")
var maxIter = flag.Int("maxIter", 51, "Max number of point iterations")
var imgSize = flag.Int("imgSize", 1000, "Size of the image")

func main() {
  flag.Parse()

  fmt.Printf("X: %f\n", *pointX)
  fmt.Printf("Y: %f\n", *pointY)
  fmt.Printf("Zoom: %f\n", *zoom)
  fmt.Printf("Julia: %t\n", *julia)
  fmt.Printf("MaxIter: %d\n", *maxIter)
  fmt.Printf("ImgSize: %d\n", *imgSize)

  start := time.Nanoseconds()
  img := CalculateImage(*imgSize, *imgSize)
  end := time.Nanoseconds()
  fmt.Printf("Time: %d ms\n", (end - start) / 1000 / 1000) // ms
  WriteImage(img)
}

func CalculateImage(imgWidth int, imgHeight int) *image.NRGBA {
  img := image.NewNRGBA(imgWidth, imgHeight)
  minCx := -2.0
  minCy := -2.0
  if !*julia {
    minCx = *pointX
    minCy = *pointY
  }
  maxSquAbs := 4.0 // maximum square of the absolute value
  // calculate step widths
  stepX := math.Fabs(minCx - 2.0) / float64(imgWidth) / *zoom
  stepY := math.Fabs(minCy - 2.0) / float64(imgHeight) / *zoom
  cx := 0.0
  cy := 0.0
  for px := 0; px < imgWidth; px++ {
    cx = minCx + float64(px) * stepX

    for py := 0; py < imgHeight; py++ {
      cy = minCy + float64(py) * stepY

      iterValue := PointIteration(cx, cy, maxSquAbs, *maxIter)

      color := ChooseColor(iterValue, *maxIter)
      img.Set(px, py, color)
    }
  }
  return img
}

func PointIteration(cx float64, cy float64, maxSquAbs float64, maxIter int) int {
  squAbs := 0.0
  iter := 0
  x := 0.0
  y := 0.0
  if *julia {
    x = cx
    y = cy
    cx = *pointX
    cy = *pointY
  }

  for squAbs <= maxSquAbs && iter < maxIter {
    xt := (x * x) - (y * y) + cx // z^2
    yt := (2.0 * x * y) + cy // z^2
    //xt := x * (x*x - 3*y*y) + cx // z^3
    //yt := y * (3*x*x - y*y) + cy // z^3
    //xt := x * (x*x*x*x - 10*x*x*y*y + 5*y*y*y*y) + cx // z^5
    //yt := y * (5*x*x*x*x - 10*x*x*y*y + y*y*y*y) + cy // z^5
    x = xt
    y = yt
    iter++
    squAbs = (x * x) + (y * y)
  }
  return iter
}

func ChooseColor(iterValue int, maxIter int) *image.NRGBAColor {
  val := uint8(iterValue)
  if iterValue == maxIter {
    return &image.NRGBAColor {0, 0, 0, 255}
  }
  multi := uint8(255 / maxIter)
  return &image.NRGBAColor {0, val*multi, 0, 255}
  //return &image.NRGBAColor{^(val*multi), ^(val*multi), ^(val*multi), 255} // grey
}

func WriteImage(img *image.NRGBA) {
  file, err := os.Create("mandelbrot.png")
  if err != nil {
    fmt.Printf("Could not create file mandelbrot.png: %v\n", err)
    return
  }
  writer := bufio.NewWriter(file)
  png.Encode(writer, img)
  writer.Flush()
  file.Close()
}

I have to admit I did not put much effort into cleaning up or optimizing the code. Mea culpa. The time needed to calculate the Mandelbrot-Set (1000×1000 px; 51 iterations) was about 600 ms on a Kubuntu VM with 2 cores @ 2.4 GHz and 2 GB RAM.

So this approach was pretty straightforward and did a good job of getting my feet wet with Go. The next step is to utilize some of Go’s main features: channels and goroutines for parallelization.

Finally, here are some pics:


Setting up a Git Server on Windows Server 2008 R2 (using msysgit and WinSSHD) [Update]

This is a follow-up to this post. It was inspired by a comment from Dan Kendall, so thanks Dan.

In the portable version of msysgit, the git-upload-pack.exe and git-receive-pack.exe files are also located in the bin/ folder. This allows us to shrink the configuration and even skip all configuration steps on the client.

The basic setup is the same as in the original post.

The server configuration is a bit different though.

1. Download the portable version of msysgit and extract it to C:\Git.

2. Add C:\Git\bin to the PATH variable.
(If you don’t want to use the portable msysgit for whatever reason, you also have to add C:\Git\libexec\git-core to the PATH.)

3. Create a new file in C:\Git\bin that contains just

$*

Save it as a shell script (.sh), e.g. “gitcmdhelper.sh”.
This simply executes all parameters that are passed to it and solves the quotes problem mentioned in the original post. There are surely other approaches to solving this issue, and if you find a better one please do not hesitate to post a comment.

4. In WinSSHD open the advanced settings and edit the user account(s) you have set up before. Scroll down a bit and uncheck “Use group default exec request prefix”. Then change the “Exec request prefix” to

cmd.exe /c sh gitcmdhelper.sh 

Be sure to add a space at the end.

5. Done. You can connect to the server with your Git client.

git clone|push|pull|etc. ssh://username@server/path/to/myrepo 

Setting up a Git Server on Windows Server 2008 R2 (using msysgit and WinSSHD)

My first attempt to use a remote Git repository on my Windows Server was to set up a WebDAV site and connect it as a network drive on my PC. In principle this works well and the setup is quite fast and easy. The only problem was that it was really slow on my Win 7 machine. Unworkably slow. Even with the magic auto-detect proxy settings turned off it didn’t improve very much. On Win XP it worked quite well though.
So that was one of the main reasons why I decided to set up a remote Git repo over SSH. Unfortunately there were some hurdles to take to get this to work.

Basic Setup:

1. Go download and install WinSSHD on your Server. There’s a free version for personal use.

2. Configure WinSSHD. Set up the account(s) you want to use etc. This should be pretty straightforward, as it’s all self-explanatory.

3. Now you should be able to connect from a remote client using simple password authentication. For example in Git Bash you can try to connect by typing

ssh username@server

If the connection works, type ‘exit’ to disconnect and return to the bash.

4. Since we don’t want to use password auth, the next step is to generate an SSH key pair.
(You can configure the allowed auth types for users/groups in the advanced settings of WinSSHD)

$ ssh-keygen -t rsa -C "Comment"
Generating public/private rsa key pair.
Enter file in which to save the key (/c/Users/shoehle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /c/Users/shoehle/.ssh/id_rsa.
Your public key has been saved in /c/Users/shoehle/.ssh/id_rsa.pub.
The key fingerprint is:
c5:ed:1b:d1:40:94:5b:a1:6b:d1:76:a1:46:8b:7b:fc Comment

The -t parameter specifies the key type, in this case RSA with a default length of 2048 bits.
With the -C parameter you can add a comment.
The default file path should be fine so just press enter at the first prompt.
Then choose a good passphrase for your key.

5. Import the public key (id_rsa.pub) in WinSSHD into the user account you want to connect with.

6. At this point you should be able to connect to your SSH server with the key. If you type ‘ssh username@server’ again, you should now connect using the key + passphrase.

Ok, so these are the basic steps to connect with Git to your server through SSH.
Also, if you don’t want to re-enter the passphrase every time you connect, have a look at this help article from GitHub.

If you now try to clone/push/pull/etc. from the server, you will very likely get some errors.
Something like “git-upload-pack: command not found” or “myrepo.git does not appear to be a git repository” or similar.
To fix the first error you can specify the path to the upload/receive pack on the server with some additional parameters. But more on this later.
The second error is due to the command that msysgit tries to execute on the shell: it encloses the path to the repository in single quotes instead of double (or no) quotes, so Windows can’t execute it properly and throws an error. To fix this we use sh.exe from msys.

[Update 05.08.2011]: Here’s a follow-up post that shows an easier way to set up the server. So you can skip the below sections.

Making Git work on the server:

1. If not already done, install msysgit on your server (I would recommend installing it directly to C:\Git, or at least to a path without spaces, because I had some weird issues with “spaced” paths).

2. Add C:\Git\bin to the PATH variable. This is very important!! sh.exe and other dependencies are in this folder

3. Now go to C:\Git\bin and add the following two files

gup.sh
grp.sh

4. Open gup.sh in your favorite editor and insert

C:/Git/libexec/git-core/git-upload-pack.exe $*

5. Open grp.sh and insert

C:/Git/libexec/git-core/git-receive-pack.exe $*

The $* essentially strips the single quotes from the repository path argument, so a path that has spaces in it probably won’t work here either.

Basically we’re done now and all git operations from the client should work.
For a clone you have to type

git clone -u 'sh gup.sh' ssh://username@server/path/to/myrepo.git

or a push would be

git push --exec 'sh grp.sh' ssh://username@server/path/to/myrepo.git

but that is not very elegant.

Cleaning things up:

1. First we want to get rid of the whole repo path by specifying a remote alias

git remote add origin ssh://username@server/path/to/myrepo.git

Where “origin” would be the alias name

2. Next we set the config for the git-upload-pack and git-receive-pack so we don’t have to reference the shell script all the time.

git config remote.origin.uploadpack 'sh gup.sh'

and

git config remote.origin.receivepack 'sh grp.sh'

That’s it. Now we can use the normal git commands without any additional parameters:

git clone origin
git push origin master
git pull origin
...

Hurray!

Alternatively I could of course have gone with Cygwin + OpenSSH, and there are some tutorials out there on how to set that up, but I don’t want the unnecessary Cygwin bloat on the server unless I really need it. Also, these Cygwin tutorials seemed to be much longer than the approach I took.

Uploading Files using AJAX

Normally, if you want to upload files through a website you would use a <form> element with enctype=”multipart/form-data” that posts the file from a nested file <input> element to the specified action (in ASP.NET MVC you would receive that file on the server side as an HttpPostedFileBase argument of your action method).

But if you don’t have or don’t want to use a <form> for some reason you have to find other ways to upload a file.

The obvious way would be to assemble and submit an invisible form with JavaScript when the user triggers the upload event. To make this work properly you would also have to create an <iframe> as the “target” of the <form> that receives the response. The advantage of this method is that it works with all browsers.

Another approach is to use AJAX, that is, an XMLHttpRequest object. This is also very simple and done in a few lines of JavaScript code:

var xhr = new XMLHttpRequest();
xhr.open("POST", url, true); // method, url, async
xhr.setRequestHeader("Cache-Control", "no-cache");
xhr.setRequestHeader("X-File-Name", file.fileName);
xhr.setRequestHeader("X-File-Size", file.fileSize);
xhr.setRequestHeader("Content-Type", "multipart/form-data");
xhr.send(file);

In the above code, “url” would be the “Controller/Action” to which the request is posted, and “file” can be extracted from any file <input> element.

“X-File-Name” and “X-File-Size” are just self-chosen optional headers that are appended to the request. That way you can send additional data with the request.

Again, the “Content-Type” has to be “multipart/form-data”, as we’re sending a file/binary data.

The “Cache-Control” header is set to “no-cache” to bypass server caching, so the server cannot respond with a “304 Not Modified” header and skip sending the data.

Finally, we have to subscribe to the “onreadystatechange” event in order to receive the Response from the server:

...
xhr.onreadystatechange = function() {
    if (xhr.readyState == 4) {      // completed
        if (xhr.status == 200) {    // OK
            var response = xhr.responseText;
            // do something
        }
    }
}
...

This solution should work quite well. But if you are using jQuery, I would recommend using its .ajax() function instead. jQuery performs several optimizations for different browsers and also takes care of error handling.

$.ajax({
    beforeSend: function (xhr) {
        xhr.setRequestHeader("Cache-Control", "no-cache");
        xhr.setRequestHeader("X-File-Name", file.fileName);
        xhr.setRequestHeader("X-File-Size", file.fileSize);
        xhr.setRequestHeader("Content-Type", "multipart/form-data");
    },
    type: 'POST',
    url: url,
    processData: false,
    data: file,
    success: function (data, textStatus, xhr) {
        // do something
    },
    error: function (xhr, textStatus, errorThrown) {
        // do something
    }
});

We modify the XMLHttpRequest in the “beforeSend” callback and set all needed options.
It’s important to set “processData” to “false”. This prevents jQuery from transforming the data to the default content type “application/x-www-form-urlencoded”.

Server Side:

A server side action method could then look like this:

[HttpPost]
public ActionResult UploadFile()
{
    bool success = false;
    string message = String.Empty;

    // Read custom attributes
    string fileName = Request["HTTP_X_FILE_NAME"];  // notice the "HTTP_" prefix
    string fileSize = Request["HTTP_X_FILE_SIZE"];

    try
    {
        // Read input stream from request
        byte[] buffer = new byte[Request.InputStream.Length];
        int offset = 0;
        int cnt = 0;
        while ((cnt = Request.InputStream.Read(buffer, offset, buffer.Length - offset)) > 0)
        {
            offset += cnt;
        }
        // Save file
        using (FileStream fs = new FileStream(@"C:\Path\To\Save\File\" + fileName, FileMode.Create))
        {
            fs.Write(buffer, 0, buffer.Length);
            fs.Flush();
        }
        // Or in one line
        // System.IO.File.WriteAllBytes(@"Path\To\Save\File\" + fileName, buffer);

        success = true;
        message = "Success...";
    }
    catch (Exception)
    {
        success = false;
        message = "Error...";
    }

    // Create JSON Response
    var jsonData = new
    {
        success = success,
        message = message
    };

    return Json(jsonData);
}

Making global resources public

Regarding the previous post, you may want to use strings from a global resource .resx file (App_GlobalResources) as error messages or descriptions in your attributes. This won’t work, because global resource files are marked as internal by default, so you can’t access them from your controller or your model. Also, the little “Access Modifier” dropdown in the resource editor is grayed out, so you can’t change the access level without further ado.

However there’s a way to change the access rights without editing the designer file by hand.

You just have to open the file properties of the resource file and change the “Custom Tool” from “GlobalResourceProxyGenerator” to “PublicResXFileCodeGenerator”, which is the default tool for local resource files. Next, change the “Build Action” to “Embedded Resource”. You may also want to assign a proper Custom Tool Namespace like “Resources” in order to access the file conveniently, but this isn’t necessary.

Now rebuild the project; the resource class should now be public, and you can access its contents from anywhere in your project.

E.g. like that:


[Required(ErrorMessageResourceType = typeof(Resources.Resource1), ErrorMessageResourceName = "Comment_Text")]
public string Myproperty { get; set; }

Localizing DisplayNameAttribute

When trying to put a localized string from a resource file (.resx) into the DisplayName attribute, you will get the following build error:

An attribute argument must be a constant expression, typeof expression or array creation expression of an attribute parameter type

An easy way to solve this is to create your own custom DisplayName attribute. To do that, we create a new class DisplayNameLocalizedAttribute, derive it from the original DisplayNameAttribute, and override the base DisplayName property to return the resource object we want.


public class DisplayNameLocalizedAttribute : DisplayNameAttribute
{
    private readonly string m_ResourceName;
    private readonly string m_ClassName;

    public DisplayNameLocalizedAttribute(string className, string resourceName)
    {
        m_ResourceName = resourceName;
        m_ClassName = className;
    }

    public override string DisplayName
    {
        get
        {
            // get and return the resource object
            return HttpContext.GetGlobalResourceObject(
                m_ClassName,
                m_ResourceName,
                Thread.CurrentThread.CurrentCulture).ToString();
        }
    }
}

Use:


[DisplayNameLocalized("MyResource", "MyString")]
public string MyProperty { get; set; }

As you can see, we are now able to pass in a resource class name (className) and a resource key (resourceName) and get the correct string back. The same procedure should work with similar attributes such as DescriptionAttribute.

Update 1: Another way would be by using reflection.

public class DisplayNameLocalizedAttribute : DisplayNameAttribute
{
    private string _displayName;

    public DisplayNameLocalizedAttribute(string resourceKey, Type resourceType)
    {
        PropertyInfo propInfo = resourceType.GetProperty(resourceKey,
                System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Static);
        _displayName = (string)propInfo.GetValue(null, null); // static property: no instance needed
    }

    public override string DisplayName
    {
        get
        {
            return _displayName;
        }
    }
}

Use:

[DisplayNameLocalized("MyString", typeof(MyResource))]
public string MyProperty { get; set; }

Update 2: In .NET 4.0 this procedure seems to be obsolete as you can just use the new [Display]-Attribute and specify the “ResourceType” and “Name” Properties:

[Display(ResourceType = typeof(MyResource), Name = "MyString")]
public string MyProperty { get; set; }