
Free ebooks – ebook management

August 30, 2011

I just bought a new Amazon Kindle, which I am very happy with. I have been scouring the net over the last few days looking for free ebooks. I was going to put together a list of free ebook sites; however, an awesome list already exists.

Calibre (http://calibre-ebook.com/) provides an excellent option for ebook management on Mac OS or Windows. There are also plug-ins for Calibre which can strip DRM from many ebooks. The legality of these plug-ins is somewhat shady depending on where you live!



First Prezi presentation – Experiences at SC’09 – UoM HPC Forum

July 8, 2011

Earlier this year a colleague of mine showed me Prezi. This short presentation was my first chance to use the software in front of an audience. The presentation is on my experiences at Supercomputing 2009 and how it influenced my Ph.D. candidature.

The link is available here: http://prezi.com/0in4wemn-aad/hpc-forum/ (I would embed the presentation, but WordPress does not allow Flash).


VLSCI news (my summer internship)

January 10, 2011

The Victorian Life Sciences Computation Initiative (VLSCI) is a Victorian state government-funded organisation providing grants of computer time for high performance life sciences computing. I have been fortunate enough to win a summer research internship with the partner IBM Research Collaboratory.

More soon.


17th AFMC presentation

January 10, 2011

Attached is my presentation at the 17th Australasian Fluid Mechanics Conference (AFMC).

The presentation is here: butler_afmc.

The corresponding paper is:

Butler, C. J., Ryan, K. & Sheard, G. J. 2010 Shear gradients within an in–vitro thrombotic environment. In 17th Australasian Fluid Mechanics Conference, University of Auckland, Auckland, New Zealand.


Charm/Charm++ programming language

December 28, 2010

I have recently started developing a numerical solver using the Charm++ package. Charm++ is designed to abstract parallel computing away from the typical MPI paradigm. The programming model is based on asynchronous communication and the construction of multiple objects per processor core ('chares'). The system implements dynamic load balancing by migrating chares across processors.
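
The chare idea can be mimicked in a few lines of plain Python. This is only a toy sketch, not the Charm++ API: the Chare and Scheduler names and the relay example are my own. It shows the essence of the model, though: objects hold state, do work only when a message arrives, and send asynchronously through a runtime queue instead of blocking on each other.

```python
from collections import deque

class Chare(object):
    """A toy message-driven object: work happens only when a message arrives."""
    def __init__(self, ident, scheduler):
        self.ident = ident
        self.scheduler = scheduler
        self.total = 0

    def recv_value(self, value):
        # An 'entry method': update local state, then send asynchronously.
        self.total = value + self.ident
        if self.ident + 1 < self.scheduler.n_chares:
            # Asynchronous send: enqueue the message, never block on delivery.
            self.scheduler.send(self.ident + 1, self.total)

class Scheduler(object):
    """Stands in for the runtime: owns the chares and the message queue."""
    def __init__(self, n_chares):
        self.n_chares = n_chares
        self.chares = [Chare(i, self) for i in range(n_chares)]
        self.queue = deque()

    def send(self, target, value):
        self.queue.append((target, value))

    def run(self):
        # Deliver messages until the system is quiescent.
        while self.queue:
            target, value = self.queue.popleft()
            self.chares[target].recv_value(value)

sched = Scheduler(4)
sched.send(0, 1)   # seed chare 0 with the value 1
sched.run()
totals = [c.total for c in sched.chares]
```

In real Charm++ the runtime also migrates chares between processors to balance load; the toy scheduler above keeps everything in one queue purely for clarity.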

The paradigm itself is efficient and powers applications which scale to extreme levels (e.g. NAMD & OpenAtom), and it has a number of very useful features (visualisation, profiling & debugging); however, it can be a mess for a developer to start with.

The manual on the Charm++ website is a great start, along with the examples in the Charm source directory. Additional resources include the webcasts of tutorial lectures. However, the language desperately needs a clear API reference to support development using Charm++.

An API reference would make development easier, especially when working with associated libraries and their limitations (e.g. C++ STL support and barrier support).

More to come as I get further along.


mpi4py parallel IO example

September 23, 2010

For about 9 months I have been running Python jobs in parallel using mpi4py and NumPy. I recently had to write a new algorithm with MPI, so I decided to do the IO in parallel as well. Below is a small example of reading data in parallel, as mpi4py is lacking examples. It is not pretty; however, it does work.

import mpi4py.MPI as MPI
import numpy as np

class Particle_parallel():
    """ Particle_parallel - distributed reading of x-y-z coordinates.

    Designed to split the vectors as evenly as possible, except for
    rounding on the last processor.

    File format:
        32bit int : data dimensions, which should = 3
        32bit int : n_particles
        64bit float (n_particles) : x-coordinates
        64bit float (n_particles) : y-coordinates
        64bit float (n_particles) : z-coordinates
    """
    def __init__(self, file_name, comm):
        self.comm = comm
        self.rank = self.comm.Get_rank()
        self.size = self.comm.Get_size()
        self.data_type_size = 8
        # Collective, read-only open
        self.mpi_file = MPI.File.Open(self.comm, file_name, MPI.MODE_RDONLY)
        self.data_dim = np.zeros(1, dtype=np.dtype('i4'))
        self.n_particles = np.zeros(1, dtype=np.dtype('i4'))
        self.file_name = file_name
        self.debug = True

    def info(self):
        """ Distribute the required information for reading to all ranks.

        Every rank must run this function.
        Each machine needs data_dim and n_particles.
        """
        # Collective reads: every rank gets the header values
        self.mpi_file.Read_all([self.data_dim, MPI.INT])
        self.mpi_file.Read_all([self.n_particles, MPI.INT])
        self.data_start = self.mpi_file.Get_position()

    def read(self):
        """ Read data and return this processor's part of the coordinates in:
            self.x_proc
            self.y_proc
            self.z_proc
        """
        assert self.data_dim == 3  # as promised by the file format
        # First establish this rank's vector size. Divide as floats so
        # np.ceil can round up; integer division would floor first.
        default_size = int(np.ceil(self.n_particles[0] / float(self.size)))
        # Rounding errors here should not be a problem unless
        # default_size is very small
        end_size = int(self.n_particles[0] - default_size * (self.size - 1))
        assert end_size >= 1
        if self.rank == (self.size - 1):
            self.proc_vector_size = end_size
        else:
            self.proc_vector_size = default_size
        # Individual processor offsets: each coordinate vector is
        # n_particles doubles long, and this rank starts
        # rank * default_size elements into each vector
        x_start = int(self.data_start + self.rank * default_size *
                      self.data_type_size)
        y_start = int(x_start + self.n_particles[0] * self.data_type_size)
        z_start = int(x_start + self.n_particles[0] * self.data_type_size * 2)
        self.x_proc = np.zeros(self.proc_vector_size)
        self.y_proc = np.zeros(self.proc_vector_size)
        self.z_proc = np.zeros(self.proc_vector_size)
        # Seek to and read each coordinate block in turn
        self.mpi_file.Seek(x_start)
        if self.debug:
            print 'MPI Read'
        self.mpi_file.Read([self.x_proc, MPI.DOUBLE])
        if self.debug:
            print 'MPI Read done'
        self.mpi_file.Seek(y_start)
        self.mpi_file.Read([self.y_proc, MPI.DOUBLE])
        self.mpi_file.Seek(z_start)
        self.mpi_file.Read([self.z_proc, MPI.DOUBLE])
        self.comm.Barrier()
        return self.x_proc, self.y_proc, self.z_proc

    def Close(self):
        self.mpi_file.Close()
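
The file format in the docstring can be exercised without MPI at all. Below is a small serial sketch that writes and reads a file in that layout using only the standard library; the helper names and the explicit little-endian byte order are my own assumptions for illustration, so adjust the byte order to match whatever wrote your real data.

```python
import os
import struct
import tempfile

def write_particle_file(path, xs, ys, zs):
    # Header: two 32-bit ints (data dimensions, n_particles), then the
    # x, y and z coordinate vectors as 64-bit floats.
    n = len(xs)
    with open(path, 'wb') as f:
        f.write(struct.pack('<ii', 3, n))
        for coords in (xs, ys, zs):
            f.write(struct.pack('<%dd' % n, *coords))

def read_particle_file(path):
    # Serial sanity check: read the header, then the three vectors.
    with open(path, 'rb') as f:
        dim, n = struct.unpack('<ii', f.read(8))
        assert dim == 3
        return [struct.unpack('<%dd' % n, f.read(8 * n)) for _ in range(3)]

path = os.path.join(tempfile.mkdtemp(), 'particles.bin')
write_particle_file(path, [1.0, 2.0], [3.0, 4.0], [5.0, 6.0])
xs, ys, zs = read_particle_file(path)
```

A file produced this way is handy for checking that each rank of the parallel reader really gets the slice of coordinates you expect.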

Compiling OpenFOAM 1.7.x for OS X 10.6

September 16, 2010

This is a short guide for installing the developer version of OpenFOAM on Snow Leopard. I have tried to include all the details.

What you need:

A Mac with OS X 10.6 and approximately 10 GB of free disk space.

Preliminary steps:

Install the OS X developer tools

Install GCC 4.3, 4.4 or 4.5 from either MacPorts or Fink (4.5 will only work with 1.7.x), along with git. I will presume 4.5.

Once this is done you may start building OpenFOAM. The Mac OS file system is not case-sensitive by default, so you need to make a case-sensitive disk image for OpenFOAM.

Open Disk Utility (/Applications/Utilities/Disk Utility)

Menu > File > New > Blank Disk Image…

You may save the image wherever you want, however, name the image ‘OpenFOAM’.

Using the drop-down menu, change the format to 'Mac OS Extended (Case-sensitive)'.

Change the size to at least 5 GB. You may increase this later if required.

Create the image and close Disk Utility.

To keep the installation nice and clean we are going to mount the image at $HOME/OpenFOAM

This is the default OpenFOAM install site which will make your life easier in the long run.

To do this add the following to your .bashrc file (if you don’t have one you will need to create one):

hdiutil attach "/path/to/your/disk_image.dmg" -mountpoint "$HOME/OpenFOAM" > /dev/null

This will mount the image when you first open the terminal from now on.
Also add the following line, which sources the OpenFOAM bash settings. This will create errors until you download OpenFOAM.

. $HOME/OpenFOAM/OpenFOAM-1.7.x/etc/bashrc

After you have saved your .bashrc file open a new window in the terminal. You will get the following error:

-bash: /Users/yourusername/OpenFOAM/OpenFOAM-1.7.x/etc/bashrc: No such file or directory

Download the following files and move them to $HOME/OpenFOAM:
The 1.7.1 third-party software pack.
The OpenFOAM 1.7.x patch by Bernhard Gschaider.

The third-party patch by Bernhard Gschaider.

Check the thread for any updates.

Now we need to edit OpenFOAM-1.7.x-Mac_v2.patch. Open the file in a text editor and check that the versions of gcc/g++ match what you have installed.
If you installed from MacPorts you will have gcc-mp-4.5 and g++-mp-4.5, whereas from Fink it is gcc-fsf-4.5 and g++-fsf-4.5. Search through the file for '-mp-' and make sure the version and distribution strings match what you have installed.

At the terminal execute the following:

cd $HOME/OpenFOAM
git clone git://github.com/OpenCFD/OpenFOAM-1.7.x.git
tar -xzf ThirdParty-1.7.1.gtgz
mv ThirdParty-1.7.1 ThirdParty-1.7.x
cd ./ThirdParty-1.7.x
patch -p1 <../ThirdParty-1.7-Mac.patch
cd ../OpenFOAM-1.7.x
patch -p1 <../OpenFOAM-1.7.x-Mac_v2.patch
. $HOME/OpenFOAM/OpenFOAM-1.7.x/etc/bashrc
./Allwmake

This should give you a working OpenFOAM distribution, with a few exceptions:
foamToTec360 does not work.
paraFoam does not work. To address this, do the following:
Download and install the Paraview application.
In your case directories, 'touch' the foam file,
e.g. in a case called 'isofoam_case':

touch isofoam_case.foam

Open this file with the binary install of Paraview.

Enjoy