Text

Full Disclosure: the publisher provided me with a free copy of the book.

Redmine is a fantastic free open source project management system. It’s been a while since I’ve had the pleasure of using it, but I started with it several years ago while working for the university I was attending at the time.

I had researched several tools, and Trac was the one I had liked best over the other project management options. The only con at the time was that Trac supported only single projects without some monkeying around. This sucked. Luckily I found Redmine, which provided everything I loved about Trac (and more) and also offered multi-project and hierarchical project management.

I thought it would be cool to have a chat client in the system for the team, so I stumbled through building a chat plugin over the course of a month. It wasn’t exceptionally hard, but it was a journey. I’ve loved the system ever since, but unfortunately haven’t had much time in it due to using different systems professionally. I did get the favorable experience of setting it up again to play with some code from the book, though. It really is a fantastic product, and I think it’s much better than most paid offerings. I can’t recommend it enough.

Redmine has rich support for plugins, and Rails is really an amazing framework for extension. Having a book to accompany the process is a nice complement to the journey of developing a plugin, so I was somewhat excited to see a resource like this appear. What follows is a review of the book itself.

Redmine Plugin Extension and Development
ASIN: B00J4LO3Q8
http://www.amazon.com/Redmine-Plugin-Extension-Development-Bevilacqua-ebook/dp/B00J4LO3Q8

http://www.packtpub.com/redmine-plugin-extension-and-development/book

Quick Summary

This book highlights the author’s perspective on building a plugin, drawn from his experience building the Knowledgebase plugin for Redmine. It covers the basics as well as some of the interesting aspects you’ll likely need, like hooking into Redmine itself and working with permissions, along with several other aspects you may or may not need (e.g. file attachments, integrating with the activity stream, modifying views, etc.). The book wraps up with a quick overview of writing tests, although it’s more of an overview than something as comprehensive as the other parts of the book.

Who this book works well for

This book is well written for most beginning audiences. It doesn’t have extra fluff and gets straight to the point. It’s concise, almost like a cookbook, but still reads well if you prefer casual perusal. Basically anyone wanting to build a Redmine plugin will probably get something out of it.

What This Book Isn’t

This is not a book that will take a beginner to Redmine and Ruby into developing Redmine plugins without some effort. You won’t get by very easily if you don’t have a basic knowledge of Ruby, and preferably of Rails. You will likely have to pull information from other sources or stumble around a bit yourself. That said, it’s manageable, as the concepts and ideas underlying everything are pretty easy to figure out.

If you are looking for a book in this niche of development and/or are developing a new Redmine plugin, this is a great one to have at hand. It’ll save a bunch of time and effort sifting through various documentation, especially for some of the nuances you run into in normal plugin development.

Text

By the end of this post you will be able to figure out where IL code is in memory (memory address and hex opcodes), have a basic understanding of WinDBG, and understand some mechanics of the CLR, JIT, and app domains.

My goal is to write a dynamic code injector for .NET. To that end, I decided to start by learning WinDBG and using it to explore the inner machinery of an app domain. I’ve completed my first exercise of finding a method using the debugger and printing out its IL code (and thus discovering all of the addresses and metadata along the way). This article is a guide through that experience so that you may have it for yourself.

Why WinDBG?

If you have ever stumbled into WinDBG before, it looks rather ugly and cryptic: something someone from the days of Windows NT or 95 might have used. However, even with my brief flirtation with the software I have already come to appreciate its power. I was surprised to learn in my research that many people at Microsoft actually favor WinDBG over Visual Studio for much of their debugging. I believe this comes from the power the debugger has.

As off-putting as it might be at first, there is also something soothing about using a console to debug and explore. WinDBG was the easiest path I found to explore things at a level beyond Visual Studio’s default debugger, and most of the JIT/.NET guides out there seemed to use it.

1 - Getting WinDBG and a Simple Sample Application

Getting WinDBG is pretty much just a Google search away. I used the Windows SDK download.

I started with a simple console application. Out of the box I had a Main method, of course. I added two more methods, Foo() and Bar(), with a call only to Foo. Foo prints a “hello world”-like message. My goal was to use injection to eventually swap the JIT calls to Foo with calls to Bar dynamically, but for this exercise I just wanted to explore. Below is the simple application I was operating on with WinDBG:

using System;

namespace FooBar
{
    class Program
    {
        static void Main(string[] args)
        {
            var program = new Program();
            Console.ReadKey();   // pause so the debugger can be attached
            program.Foo();
            Console.ReadKey();   // keep the process alive after the call
        }

        public int Foo()
        {
            int y = 15;
            Console.WriteLine("Foo Called!");
            return y;
        }

        public int Bar()
        {
            int x = 27;
            Console.WriteLine("Bar Called!");
            return x;
        }
    }
}

My goal was to be able to find Foo() using WinDBG.

2 - SOS and WinDBG

WinDBG has an extension called SOS. WinDBG by itself is a great debugger for unmanaged code; you can do all of the usual debug stuff. It gets a lot more challenging when you want to debug managed code. To help you out there, the Son of Strike extension fills the gap, providing managed-code debugging support that lets you view IL code, see whether things have been JITed or not, set managed breakpoints, and more.
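A few SOS commands give a feel for what it adds (a sampler, not commands from this session; the FooBar names assume the application above):

  • !clrstack - dump the managed call stack of the current thread
  • !dumpheap -stat - summarize the managed heap by type
  • !bpmd FooBar.exe FooBar.Program.Foo - set a breakpoint on a managed method, even before it has been JITed
  • !name2ee FooBar.exe FooBar.Program.Foo - look up the method table and Method Descriptor for a method by name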

As an interesting aside on the naming of SOS, and of COR for that matter, which pops up in the names of many of the DLLs such as mscorwks.dll: Chris Schmich on Stack Overflow provided some insight:

Jason Zander’s blog post explains it perfectly:

The original name of the CLR team (chosen by team founder and former Microsoft Distinguished Engineer Mike Toutonghi) was “Lightning”. Larry Sullivan’s dev team created an ntsd extension dll to help facilitate the bootstrapping of v1.0. We called it strike.dll (get it? “Lightning Strike”? yeah, I know, ba’dump bum). PSS really needed this in order to give us information back to the team when it was time to debug nasty stress failures, which are almost always done with the Windows debugger stack. But we didn’t want to hand out our full strike.dll, because it contained some “dangerous” commands that if you really didn’t have our source code could cause you confusion and pain (even to other Microsoft teams). So I pushed the team to create “Son of Strike” (Simon from our dev takes credit/blame for this), and we shipped it with the product starting with Everett (aka V1.1).

Also, I had heard of the CLR being referred to as “COM+ 2.0” before, but apparently it’s had a few names in its time (from here):

The CLR runtime lives in a DLL called MSCOREE.DLL, which stands for Microsoft Common Object Runtime Execution Engine. “Common Object Runtime,” or COR, is one of the many names this technology has had during its lifetime. Others include Next Generation Windows Services (NGWS), the Universal Runtime (URT), Lightning, COM+, and COM+ 2.0.

Loading SOS

Loading SOS is done by a command:

.loadby sos clr

This actually failed for me the first time I tried it. After trying a bunch of things I just about gave up, assuming WinDBG was just a pain in the ass to work with; then, as a last resort, I decided to verify the process was x64. I quickly discovered the problem: I was running the x64 WinDBG against a 32-bit application. I had assumed my application was x64 since I have an x64 laptop; however, console application projects build as 32-bit by default. After switching to the 32-bit WinDBG, the SOS extension loaded without issue.

Attaching, Setting Symbol Paths, and Source File Paths

After I had all this set up, I realized I didn’t have any symbols loaded. Symbols are found in program database (.PDB) files. These files can contain different levels of information that allow more granular debugging. The PDB for my program, FooBar.pdb, was generated alongside FooBar.exe in the same output directory. In order for symbols to be found for my program, and for all of the Microsoft ones too, I had to set them up.

First I went to the File menu in WinDBG and set the Source File Path to the bin directory of my application. I’ll need to investigate later exactly what this setting is for; I believe it is how the debugger locates source files when stepping, rather than what finds the PDB.

Second, and probably more importantly, I set my Symbol Path to the following:

SRV*C:\Users\Josh\Google Drive\Playground\WinDbgMem\FooBar\ConsoleApplication1\bin\Debug*http://msdl.microsoft.com/download/symbols

Some people recommend defining a general symbol path to store all symbols locally. I found that with the above setting, the Microsoft symbols for the .NET Framework DLLs I was using were downloaded and cached into the bin directory of my application. The Microsoft symbol server entry at the end ensures we can pull any available symbols we don’t have from the Microsoft symbol servers.
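For what it’s worth, the more common convention seems to be a dedicated local cache directory (C:\symbols here is an arbitrary path of your choosing):

SRV*C:\symbols*http://msdl.microsoft.com/download/symbols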

At this point I am basically ready to attach. I go to the File menu and click “Attach to Process”. I find FooBar.exe in the list and attach. The debugger attaches itself and gives me a console.

I type the SOS load command:

.loadby sos clr

and get back a blank line, indicating it probably succeeded. The .chain command lets me see the extensions loaded into WinDBG. I typed it out:

.chain

And get back:

[image: WinDBG console showing a mistyped load attempt, the successful .loadby sos clr, and .chain listing SOS]

Notice that I typed the command wrong the first time (.load instead of .loadby) and got back an error. Then I typed it correctly and it loaded. The .chain command now shows SOS in the extensions list. We can now use SOS.

Finally setup is complete and we can start looking at some cool stuff.

Modules

“Module” is the generic term for the managed or unmanaged image that a library or executable project produces. The first thing to do when looking at a program in WinDBG is to view the modules. For this we use the list modules command:

lm

[image: lm output, with most modules showing deferred symbols]

Notice that symbols are only loaded for a couple of these. Being the eager bastard I am, I’d like to see symbols loaded for all of them. So I am going to go ahead and force the symbols for every module to be loaded using the reload command:

.reload /f

Now we check the loaded modules using lm again:

[image: lm output after .reload /f, now showing pdb symbols]

Notice that “deferred” has now been replaced with “pdb symbols”, and that even FooBar.exe has symbols!

Starting Down the Rabbit Hole

Let’s start at the top and look at the domains. When the CLR starts up, three domains are actually initialized. Two of these are automatic and are not known even by the host. The other is the domain your application actually runs in. The first two are created as part of bootstrapping by mscoree.dll and mscorwks.dll, or, if you have multiple processors, the latter may be mscorsvr.dll [1].

The system domain sets up the shared domain and the default application domain and loads mscorlib.dll into the shared domain. Remember from the aside earlier that “cor” in these library names refers to the Common Object Runtime, a synonym for the Common Language Runtime (CLR).  The system domain also handles string interning and setting up/tearing down app domains [1].

The shared domain is where common code is loaded. User applications can be loaded into this domain if they are loaded as domain-neutral. ASP.NET is apparently supposed to do this by default, but I have yet to see it in practice; there are some nuances around “binding closures” that may be preventing the applications I’ve worked on from taking advantage of it.

The default domain is where my application will run!

Extensions to WinDBG follow the convention of using an exclamation mark to denote an extension command. The first one we will use is:

!dumpdomain

[image: !dumpdomain output listing the System domain, the Shared domain, and Domain 1]

Notice that we see the three domains we were just talking about, and we even have some sexy-looking memory addresses. The domain I am interested in here is “Domain 1”, since it holds my application. I can use the memory address next to my “FooBar.exe” module to get some more information about it using the !dumpmodule command (-mt will provide some more information on the types):

!dumpmodule -mt 00ed2ed4

[image: !dumpmodule -mt output, including the method table address for FooBar.Program]

So now I know where the method table for FooBar.Program is! Let’s take a look at it using !dumpmt -md (-md, of course, adds more detail about each method):

[image: !dumpmt -md output listing each Method Descriptor and its JIT status]

Notice we have some information here about whether each method was precompiled (PreJIT), has already been compiled as part of just-in-time compilation (JIT), or has yet to be compiled and still contains a JIT stub (NONE). This is pretty cool!

The second column is the address of the Method Descriptor. We can now find the IL for the method using the !dumpil command, passing the Method Descriptor address:

!dumpil 00ad379c

[image: !dumpil output for Main]

That looks like the IL code for the main function! So this is pretty neat, but it would be even cooler if we could see the code for a function directly in memory. Let’s grab Foo’s IL, though, since we have that integer in there, a string, and a call.

[image: !dumpil output for Foo, including the ilAddr line]

As an aside, the nops are often there so that breakpoints set on braces in the source code have an address to land on.

Notice the ilAddr line. Let’s open a Memory window (View Menu > Memory) and then in the address field type that ilAddr number:

Now if you look at the IL code and reference it against the opcodes for the CIL instructions, you can start to see where the IL is in memory. Below are small color divots indicating some of the first mappings. I stopped at the string because I was tired and unsure how to interpret it. I’m guessing the string “Foo Called!” is stored at some memory location, and that what follows what I have so far is a reference to it.

[image: Memory window with the first IL bytes color-marked against their opcodes]

So with that, we have successfully found the IL code in memory. Hopefully next time we will start to be able to tweak and manipulate some of these values.

Using Wikipedia’s CIL opcode page against the bytes above:

  • 00 - nop - no operation; often used as a placeholder for breakpoints on braces in IL/C#/.NET.
  • 1F - ldc.i4.s <int8 (num)> - push the following int8 onto the stack as an int32
  • 0F - in this context is just the number 15
  • 0A - stloc.0 - pops the value from the stack into local variable 0
  • 72 - ldstr - loads a string
  • the four bytes that follow are most likely the metadata token for the string, rather than a raw address
  • … and so on
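Putting that together, here is a sketch of how Foo’s first bytes might map onto the IL (the xx token bytes are placeholders, not values from the actual dump):

00               nop
1F 0F            ldc.i4.s 15
0A               stloc.0
72 xx xx xx xx   ldstr "Foo Called!"        ; xx = 4-byte metadata token for the string
28 xx xx xx xx   call Console::WriteLine    ; xx = 4-byte metadata token for the method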

[1] - http://msdn.microsoft.com/en-us/magazine/cc163791.aspx

Link

Why Functional Programming Hasn't Won Out Over Imperative Programming

Response by Eric Lippert. This man has such a talent for eloquence.

Link

Epic Project Combining Vision Processing, Robotics, and F#

Full tutorial and walk through along with source.

Text

For the past year, in my free time, I have been working on building what will eventually be a small villa at my parents’ house. This project is an exploration of carpentry and all that goes into building a house. A few pictures follow:

As part of this project I plan on doing something interesting: I thought it would be fun to be able to control the house from my phone. I was originally going to use a Raspberry Pi board to accomplish this, but due to availability issues I ended up using an Arduino Uno R3 instead:

The Arduino will be the “computer” driving the house. It will talk to the outside world and decide which circuits should be turned on and off. Unfortunately I can’t just jam 10 amps of 110 VAC into this board without potentially starting a myriad of forest fires. In order to control the circuits I need a way to control larger AC voltages with smaller DC voltages.

Relays are a great way to accomplish this. At first I was looking at mechanical relays, but these had two issues. One is that they are noisy; I don’t want to hear clicking every time a circuit turns on or off. The other problem is that the ones I looked at were on when the control signal was low. This is undesirable if the system fails in some way, as it would mean lights and other services would be stuck on.

To address this I ended up going with an 8-channel solid state relay board. These are quiet, and the particular model I looked at is off at low voltage. You can see it working below:
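As for driving it from the Arduino, here is a rough sketch of the control side (the pin number is an assumption; the module is taken to be active-high, i.e. off when the input is LOW, matching the board described above):

// Minimal sketch: toggle channel 1 of the relay board.
// Assumes the channel 1 input is wired to digital pin 2.
const int RELAY_CH1 = 2;

void setup() {
  pinMode(RELAY_CH1, OUTPUT);
  digitalWrite(RELAY_CH1, LOW);   // start with the circuit off
}

void loop() {
  digitalWrite(RELAY_CH1, HIGH);  // circuit on
  delay(5000);
  digitalWrite(RELAY_CH1, LOW);   // circuit off
  delay(5000);
}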

I have a temperature/humidity sensor on the way. This will provide me with data to be able to figure out whether heating or cooling circuits should be activated. I also have an Ethernet shield on the way which will allow me to plug this board into my home network.

Communicating with my Phone

The idea is that I can forward and route a public port on my router directly to the Uno. I can then develop a phone application that communicates on this public port. With this system I should be able to turn lights on and off and adjust temperatures from my phone, all while sitting on the couch.
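A sketch of what the board side might look like once the Ethernet shield arrives (the MAC address, IP, port, and one-character protocol are all assumptions for illustration):

#include <SPI.h>
#include <Ethernet.h>

byte mac[] = { 0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED };
IPAddress ip(192, 168, 1, 177);   // assumed static address on the home network
EthernetServer server(8080);      // the port forwarded from the router

const int RELAY_CH1 = 2;

void setup() {
  pinMode(RELAY_CH1, OUTPUT);
  Ethernet.begin(mac, ip);
  server.begin();
}

void loop() {
  EthernetClient client = server.available();
  if (client) {
    char command = client.read();  // toy protocol: '1' = circuit on, '0' = off
    digitalWrite(RELAY_CH1, command == '1' ? HIGH : LOW);
    client.stop();
  }
}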

Wall Mounted Color Touch Screen

I also plan on eventually adding a color touch screen that will sit in the wall and allow users to set temperature schedules and turn circuits on and off (http://imall.iteadstudio.com/im120417020.html). The TFT touch screen I am looking at has a physical mounting issue, but this is easily alleviated with pin header extenders (http://www.amazon.com/gp/product/B004G56J8W).

I was hoping to spend more time working close to the hardware on firmware, but the ADK is too convenient. I’m still trying to figure out a good reason to do some kernel work.

Tomato Potato,

Josh

Text

I was having a discussion about a week ago with some other students, and a question came up in class: “Why don’t we just use a static class?”

The professor replied: Static class?

I elaborated on the original question: In C# we are able to use a static class to create a non-instantiable class with only static methods. Java has static methods. Why not static classes like C#?

The professor, a little puzzled, replied: A standard idiom is just to have a private constructor and only have public static methods.

He then moved on, paying little more attention to the question, sufficiently satisfied that the discussion itself was addressed.

Why doesn’t Java have static classes?

Well, to answer this question we need a bit more enlightenment about what a static class is in C#.

The .NET family of languages compiles down to a byte code called IL code. It is like an assembly language with object-oriented support that eventually gets compiled and cached on the fly into native machine code (this is the JIT, or just-in-time compilation). There are numerous benefits to doing things this way, but that’s outside the scope of this article.

So when we write C#, the C# compiler isn’t converting our code to native machine code; it is converting it to IL code.

Let’s look at what a static class looks like in IL code.

[image: C# Secksy Time - the static class source]
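Something along these lines (a hypothetical MathHelpers, standing in for the class in the screenshot):

public static class MathHelpers
{
    public static int Square(int x)
    {
        return x * x;
    }
}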

Notice the IL code the static class gets compiled into:

[image: IL Code Sexy Time - the compiled IL]
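Roughly, it comes out like this (reconstructed from the MathHelpers stand-in above, not from the original screenshot):

.class public auto ansi abstract sealed beforefieldinit MathHelpers
       extends [mscorlib]System.Object
{
    .method public hidebysig static int32 Square(int32 x) cil managed
    {
        .maxstack 2
        ldarg.0
        ldarg.0
        mul
        ret
    }
}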

There are a couple of things to notice. First, there is no static modifier. Second, there are now an abstract keyword and a sealed keyword.

An abstract modifier means that we can’t instantiate the class. A sealed modifier means we can’t inherit from the class. So a static class is, glossing over a couple of small details, a non-instantiable class that cannot be inherited from.

It’s a hop and a skip over a small but dangerous creek to realize that in Java we could probably get the same functionality by combining the abstract and final modifiers, except that javac rejects that combination outright. That leaves the more standard idiom: a normal class with a final modifier in the class declaration and a private constructor.

static class could be loosely translated into Java as abstract final class
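In code, the practical idiom looks something like this (the hypothetical MathHelpers again, since javac won’t accept abstract final):

public final class MathHelpers {
    // final prevents subclassing; the private constructor prevents instantiation
    private MathHelpers() { }

    public static int square(int x) {
        return x * x;
    }
}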

Your self appointed ruler of the world,

Josh

Text

Why Build a Boot-Loader?

I’ve been needing something outside of my day job to invest my more eccentric energies in, so I have started moonlighting in operating systems and embedded systems. My first self-proposed project was to learn what a boot-loader is, how it works, and how to build one. After encountering GPT partitions in my research I was also curious where something like GPT fits in. GPT is something you might have run into as a requirement when you try to format a hard drive more than 2 TB in size, since MBR only supports up to 2 TB of space. Having achieved the goals and answered the questions listed above, I am now writing about it.

How Easy is it to Build (and Test) A Boot-Loader?

It was actually pretty easy. There are many guides out there that made it quite enjoyable. I am even more excited to say that I tested the boot-loader with a very low-level but easy approach that doesn’t seem to have been done before, at least from the light research I did while trying to achieve it. This method basically consists of writing the boot-loader directly into a VirtualBox hard disk by hand.

Boot-loaders themselves are actually quite simple to build. As long as you follow a few rules you have a valid boot-loader.

  • Must be 512 bytes in size.
  • Must end with the magic number signature, the bytes 0x55 0xAA in the last two positions (read as the 16-bit word 0xAA55 on a little-endian machine).
  • Must be compiled as 16-bit code if you are using an assembler.

The processor actually starts in 16-bit mode and is still in 16-bit mode when your boot-loader code is called. Somewhere in your code you have to switch it to 32-bit mode in order to take advantage of 32-bit functionality. The terminology for these two states is real mode and protected mode.

When compiling with GAS (the GNU assembler) I had problems; I couldn’t get my generated binary to end with the correct 55 AA signature. I switched over to NASM and everything seemed to work well.
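For reference, a minimal NASM boot sector satisfying the rules above might look like this (a sketch, not my exact code):

BITS 16
ORG 0x7C00                  ; the BIOS loads the boot sector here

start:
    cli                     ; disable interrupts
    hlt                     ; halt; a real loader would print something or load a kernel

times 510 - ($ - $$) db 0   ; pad the sector out to 510 bytes
dw 0xAA55                   ; boot signature (bytes 55 AA on disk)

Assembled to a flat binary with nasm -f bin boot.asm -o boot.bin, this produces exactly 512 bytes ending in 55 AA.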

How Do I Test the Boot-Loader?

I have a netbook, and I don’t have any spare USB drives. This means I couldn’t write my boot-loader to an actual piece of hardware. I also didn’t want to break my computer. These constraints meant I needed to use a virtual machine and either create a floppy disk image or write the boot-loader directly to disk.

I really liked the idea of writing my boot-loader directly onto the VDI by hand. It seemed like this might help solidify in my mind how things work and where things actually live on the disk. It’s still abstracted when you use a tool to write a boot-loader to the disk; I mean, who knows what all that tool is doing? When you do it by hand, though, you know exactly what is on the physical media and what needs to happen to make a boot-loader work.

There aren’t any guides that seem to tackle the problem this way, so I had to do a little bit of investigation and discovery.

Looking at the VDI spec I found that it has a 512-byte header, so I figured the disk image was supposed to start right after that. When I wrote the boot-loader into that address space, however, it didn’t work.

I tried a couple more times, failing, and then decided to just launch a LiveCD ISO image I had downloaded and run cfdisk on the virtual disk.

Cfdisk is the menu-driven version of fdisk, and basically lets you easily create partitions. It also lets you mark them as bootable. Writing the partition table out put that AA 55 marker I was talking about earlier in place. So I searched for this marker in the VDI file (after running cfdisk), and lo and behold, I found a basic MBR with some partition information. I simply went to the offset 512 bytes prior to the end of the marker, turned on insert mode, and pasted my assembled boot-loader from one window to the other. To my astonishment, it worked!

For my environment I used the Linux distribution JolliOS, which is Ubuntu-based.

Tools used:

  • NASM
  • Bless Hex Editor
  • VirtualBox

I also read that a tool called Bochs, a PC emulator, works well for testing boot-loaders. I toyed with it a little, but after a little trouble decided to pursue the VirtualBox approach instead.

Next Steps?

My next objective is to get the processor into 32-bit mode and load in my own kernel.

There is likely more work involved than I realize, but once that is working I’d like to build the Linux kernel from source and then have my boot-loader load the Linux kernel.

Text

Weight ROI towards Quality

In business there seems to be a tendency to gravitate towards asking, “How can we deliver a decent product more quickly with the same resources?”, with the assumption, of course, that more product means more value and more ROI.

I think the question should instead be, “How can we deliver a quality product, without the extra fluff, in the most productive way possible?” We need to look more at the ROI of quality. We have to be careful with this, though; these are dangerous waters.

The goal implied by this question doesn’t mean spending more time on over-engineering, excessive design, or slowing development. It means taking the time to develop existing code well, paying down technical debt, and loosening expectations about the length of development. But what does this really say? Basically, I am making the point that management should want a quality product and should ensure that plenty of time is allowed for the product to reach a certain level of quality. Management should not be squeezing deadlines or making teams feel pressured to deliver faster; they should be encouraging them to develop a product thoroughly. In return, developers need to be more efficient and avoid wasting excessive time on quality that provides little value (premature optimization, over-design, beautifying code, etc.).

Avoid Burning Your Creative Process On Improvements With Small Returns

Mature developers often want to spend significant amounts of time on optimizations, generalizations, and design under the warrant of craftsmanship, pride, and aspirations of creating a perfect product. These aspirations are noble, but when they are engaged prior to a demonstrated need, they are short-sighted and lacking in wisdom. Creating a perfect product is impossible. You can’t build perfection on the first go-around no matter how much you decouple, open-close, or compose. We need to stop trying to achieve perfection as developers. There is a cost to perfection, it is high, and most importantly, perfection is short-lived in a world of changing needs.

Remember Why You Do What You Do

We are in the business of providing value. This value is manifested in two forms. One form of course serves the stakeholder, or what is sometimes called the client. Producing value for the client means producing something they can use to achieve their goals. Our code also provides value both to and via future developers; that future developer may very well be you six months down the road. Well-designed, extensible software is wonderful. It makes everyone’s lives easier. The value here, though, is an indirect return, and it often diminishes as requirements shift. You can make the code as shiny and pretty as you want, but at a certain point you are providing very little value at the cost of massive amounts of time. There is a diminishing return to your coding brethren. The direct value you provide is a functioning, easy-to-use product for the stakeholder; always remember that this is most of your value. I believe you should get to that goal as quickly as possible while sticking to your coding morals. Design, but don’t linger on design; you need to ship, and design can always be refactored. Optimize code if convenient, but don’t go out of your way. Avoid investing yourself in things that do not have reasonable answers.

Our Reinvented Wheels Are Often Squares

We should also avoid rewriting software. If the first developer did a bad job with a solution, how likely is it that we can really do better, especially considering the existing software may have years of fixes and usage represented in its code base? We need to improve existing code into a better product. It’s easier to write code than to read and understand code. Writing new solutions is the easy way out, and often we build a worse solution. Do the work to reap the insights from an existing solution, both via its shortcomings and its time-tested working parts.

Time and Focus are Limited Resources

We have a tendency to think there is always more time. The more you mature, the more you learn that both focus and time are limited resources, and as you build more complex products both become more precious. Avoid falling into the trap of thinking you can always do something more in your extra time, or that you can just “buckle down” and get more done. Be reasonable with yourself, and stop deluding yourself into thinking there is always a “later”.


May the Source be with You,

Joshua Enfield

Link

Safe Code and Unsafe Code - Eric Lippert

This guy is one of my software idols.