All posts by David Veksler

Basic AJAX with MVC3 & Razor

Suppose you have a page which requires you to load content inline in response to some action. In the example below, there is a filter and a “Preview” button which needs to show some sample data.

You could have the button do a full post and have the MVC handler parse the form fields and render the view with the data you want to show. But you may have a complex page with many tabs and want to optimize form performance. In that case, you want to send just the form data for the current tab and inject the response into the current page. Here is how to do that.

Here is the HTML of the relevant section of the “Export” tab:

<fieldset>

<div class="btn green preview">
<a href="#">Preview</a></div>
<div class="clear"></div>

<div id="preview"><!-- the AJAX response is injected here --></div>
</fieldset>

<div class="btn orange submit">
<a href="#">Export to CSV</a></div>
<div class="clear"></div>

Note the two anchors styled as buttons (the divs with the btn class). Here is the relevant JavaScript:

$(document).ready(function () {

    $('.preview').click(function () {
        // Show a loading message, then post only this form's fields
        // and inject the response into the #preview div.
        $('#preview').html('<strong>Loading...</strong>');
        $.post('Home/Preview/?' + $(this).closest("form").serialize(), function (data) {
            $('#preview').html(data);
        });
        return false; // don't follow the href="#"
    });

    $('.submit').click(function () {
        $('.submit a').html("Loading...");
        $(this).closest("form").submit();
        return false;
    });

});

The “Export” button submits the entire form. The “Preview” button, on the other hand:

  1. Shows the “loading” text.
  2. Serializes the content of the parent form.
  3. Posts the serialized content to a URL.
  4. Renders the response from that URL in the designated div.

Here is Preview.cshtml:

@{
    Layout = null;
}
@model Models.ExportFilter

<h2>@ViewBag.Message</h2>

<strong>Name, Source, Email, DateAdded</strong>
<ul>
    @foreach (var item in Model.MarketingList)
    {
        <li>@item.FirstName, @item.Source, @item.Email, @item.DateAdded.ToShortDateString()</li>
    }
</ul>

Note that I am overriding the default Layout because I don’t want to show the normal header/footer in the AJAX response. (An alternative is to point Layout at a _Blank.cshtml that contains solely “@RenderBody()”.)

The handler for the /Preview target is:

[HttpPost]
public ActionResult Preview(ExportFilter filter)
{
    var export = new EmailExporter(filter);
    var emailList = export.Process();
    ViewBag.Message = string.Format("{0:##,#} records to be exported", emailList.Count);
    return View(filter);
}

Now when I click “preview”, I get a momentary “loading” screen and then the rendered view.

Customizing Work Items in TFS

A quick guide to customizing work items in Team Foundation Server:

1: Install Team Foundation Server Power Tools.

2: In Visual Studio, connect to your TFS collection and open the Work Item Template definition on the server.

3: Select the work item template you want to edit.

4: Today, I added a “Pass/Fail” field to the Test Case work item. Click “New” to add a new field.

5: You can specify a number of field types such as text, number, dropdown, etc. For a dropdown, specify a list of allowed values on the “Rules” tab, as in the sketch below.
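In the work item type definition XML, a dropdown field with its allowed values looks roughly like this (the refname below is a hypothetical placeholder; use your own naming convention):

<FIELD name="Pass/Fail" refname="MyCompany.TestCase.PassFail" type="String">
  <ALLOWEDVALUES>
    <LISTITEM value="Pass" />
    <LISTITEM value="Fail" />
    <LISTITEM value="Not Run" />
  </ALLOWEDVALUES>
</FIELD>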

 

6: After you add a new field, you need to add it to the form layout.

7: You can also define rules for workflow transitions that depend on your new field.

Setting up continuous integration with TFS

This is a quick visual guide to setting up continuous integration with TFS 2010.

In TFS, you create build definitions with specific trigger criteria. After the build output is copied to the drop folder, MSBuild calls the MSDeploy publishing service to update the target website.

1: Configure a new TFS build controller and connect it to your TFS team project collection.

2: Add a new build definition.

3: Choose continuous integration build triggers. I set up two triggers: one for a rolling build, and one to run every day at 3 AM.

4: Specify the build controller and staging location (build output folder) for the builds.

5: Now you need to copy the builds from the build folder to the web server. We can do this using MSDeploy.

In the “MSBuild Arguments” field, I put:

/p:DeployOnBuild=True /p:DeployTarget=MsDeployPublish /p:MSDeployServiceURL=http://dailyodin.example.com /p:DeployIISAppPath="/OdinDaily" /p:CreatePackageOnPublish=False /p:MsDeployPublishMethod=RemoteAgent /p:AllowUntrustedCertificate=True /p:username=MYDOMAIN\SERVICE_ACCOUNT /p:password=PASSWORD

Note that TFS will run unit tests by default.

(TODO: I need to figure out how to use integrated authentication so I don’t have to save the credentials in plaintext.)

 

Now I need to configure the website on the target server:

6: Set up the integration web server. Use the same application path as the DeployIISAppPath above.

7:  Install the web deployment tool 2.0 from http://www.iis.net/download/webdeploy

When you install the tool, make sure the Remote Agent Service is included in the install.

8: The Remote Agent Service is set to manual startup by default. Change it to automatic.

9:   That should be it.  Now you should be able to “Queue a New Build” or (depending on your build trigger) check in some code and have your website updated!

You should be able to see your builds in TFS by double-clicking on the build definition. Any successful or failed builds will show up in that view.

 

Closing Notes:

The TFS server, build server, build drop location, and web server can all be separate machines.  Using web deployment, we can locate the web server anywhere on the Internet, so we can use this method for one-click deployment to production as well.  However, we probably don’t want to save the credentials for the production server in the build definition to avoid accidental one-click deployment to live servers!

 

Update:

If you just want to deploy the build output to a folder or file share, you can modify the build process template: add a CopyDirectory activity to the step after RunOnAgent. (Source)

 

Great image optimization tools for Windows and Mac

Over the last few weeks, I’ve experimented with image optimization tools. Using these tools, I have rapidly eliminated gigabytes of image data from thousands of images without any quality loss. Over time, this should translate to many terabytes of bandwidth savings.

Because these tools can be run in batch mode on thousands of images at a time, they are useful for optimizing large, existing image libraries. They are lossless, so you can safely run them in bulk without any loss in image quality. But be careful: test on small samples first and learn each tool’s specialties and quirks.

An alternative to running them locally is Yahoo! Smush.it, an online service that “uses optimization techniques specific to image format to [losslessly] remove unnecessary bytes from image files” using the same tools. The best way to run Smush.it is via Yahoo! YSlow, an extension for the Firebug add-on for Firefox. (By the way, Smush.it renames GIF images to .gif.png when it shrinks them. I wrote a console app to rename them back to .gif; a sketch of one appears below. Browsers only look at the image header to identify images, so it’s safe to serve up PNG images with a .gif extension.)
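That renamer was only a few lines; a minimal sketch in C#, assuming the images live under one root folder, might look like this:

using System.IO;

class RenameSmushedGifs
{
    static void Main(string[] args)
    {
        // Hypothetical default folder; pass your image root as the first argument.
        string root = args.Length > 0 ? args[0] : @"C:\images";

        // Smush.it saves optimized GIFs as *.gif.png; rename them back to *.gif.
        // (Safe because browsers identify images by their header, not the extension.)
        foreach (string path in Directory.GetFiles(root, "*.gif.png", SearchOption.AllDirectories))
        {
            string target = path.Substring(0, path.Length - ".png".Length);
            if (File.Exists(target))
                File.Delete(target); // replace the original, unoptimized GIF
            File.Move(path, target);
        }
    }
}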

Mac:

ImageOptim screenshot

For OS X, all you need is ImageOptim, which optimizes JPEG, PNG, and GIF auto-magically.  Seriously awesome tool.  (Free.)

For lossy optimization, JPEGmini is amazing. It uses knowledge of human visual perception to shrink JPEGs up to 5X without visible quality loss. (Semi-free.)

Windows:

RIOT (Radical Image Optimization Tool):

Though it has a batch mode, this is the best tool for optimizing single images, whether they are JPG, PNG, or GIF. I use RIOT to save every image I work on as well as to reduce the size of existing images that are too large. You can re-compress PNG and GIF images losslessly, but for JPG you want to save from the original file.

RIOT is available as a standalone version as well as a plugin for several image editors such as the excellent IrfanView.

 

RIOT – Radical Image Optimization Tool screenshot

The JPEG Reducer:

Run this tool in bulk on all your JPEG images to save roughly 10% of the file size. It is a GUI front end for jpegtran, which optimizes JPEG images by removing metadata and other non-display data. Because it is lossless, it is safe to run on all your images, and it will ignore any files you add which are not really JPEG images. (A sketch of scripting jpegtran directly follows the screenshot below.)

The JPEG Reducer screenshot
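If you prefer to script jpegtran yourself rather than use a GUI, a minimal sketch might look like this (it assumes jpegtran.exe is on the PATH; -copy none strips metadata and -optimize recomputes the entropy coding, both lossless):

using System.Diagnostics;
using System.IO;

class JpegtranBatch
{
    static void Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : @"C:\images"; // hypothetical default folder

        foreach (string path in Directory.GetFiles(root, "*.jpg", SearchOption.AllDirectories))
        {
            string temp = path + ".tmp";
            // Lossless re-encode: drop metadata, optimize Huffman tables.
            var psi = new ProcessStartInfo("jpegtran",
                string.Format("-copy none -optimize -outfile \"{0}\" \"{1}\"", temp, path))
            { UseShellExecute = false };
            using (var process = Process.Start(psi))
            {
                process.WaitForExit();
            }

            // Keep the result only if jpegtran succeeded and actually shrank the file.
            if (File.Exists(temp) && new FileInfo(temp).Length < new FileInfo(path).Length)
            {
                File.Delete(path);
                File.Move(temp, path);
            }
            else if (File.Exists(temp))
            {
                File.Delete(temp);
            }
        }
    }
}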

PNG Gauntlet:

This tool is a front end for PNGOUT, which losslessly reduces the size of PNG images. Warning: if you add GIF or JPEG images to it, it will create PNG versions of those images. Sometimes you want to do this, but if not, don’t add them to the queue.

PNG Gauntlet screenshot

Let me know if you have other tools or ideas for image optimization.

 

In-memory caching for an auto-complete list

When implementing an auto-complete control on your site, you may want to cache the results to keep the database queries to a minimum. Here’s a quick way to do that (the data-source lookup in the snippet below is a placeholder for your own query):

public static Dictionary<string, string> AutoCompleteHashList = new Dictionary<string, string>();
const string resultsFormatString = "{0}|{1}\r\n";

private static string GetSuggestedResults(string s)
{
    const int maxHashLengthToCache = 9;

    // Serve short, common prefixes straight from the in-memory cache.
    if (s.Length < maxHashLengthToCache && AutoCompleteHashList.ContainsKey(s))
        return AutoCompleteHashList[s];

    var results = new StringBuilder();

    // GetMatches is a placeholder for the data-source query (e.g. a LINQ or SQL lookup)
    // that returns key/value pairs for the current prefix.
    foreach (var recipe in GetMatches(s))
        results.AppendFormat(resultsFormatString, recipe.Key, recipe.Value);

    if (s.Length < maxHashLengthToCache)
        AutoCompleteHashList.Add(s, results.ToString());

    return results.ToString();
}
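One caveat: a plain Dictionary is not thread-safe, and in a web application many requests can hit this cache concurrently. A sketch of the same idea on ConcurrentDictionary (.NET 4 and later; GetMatches below is again a hypothetical stand-in for your real query) looks like this:

using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Text;

public static class AutoCompleteCache
{
    const int MaxPrefixLengthToCache = 9;

    static readonly ConcurrentDictionary<string, string> Cache =
        new ConcurrentDictionary<string, string>();

    public static string GetSuggestedResults(string s)
    {
        // GetOrAdd builds the result once per short prefix, safely under concurrency.
        if (s.Length < MaxPrefixLengthToCache)
            return Cache.GetOrAdd(s, BuildResults);

        return BuildResults(s);
    }

    static string BuildResults(string s)
    {
        var results = new StringBuilder();
        foreach (KeyValuePair<string, string> match in GetMatches(s))
            results.AppendFormat("{0}|{1}\r\n", match.Key, match.Value);
        return results.ToString();
    }

    // Hypothetical placeholder: substitute your real database lookup.
    static IEnumerable<KeyValuePair<string, string>> GetMatches(string s)
    {
        yield break;
    }
}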

Stanza Catalog Update

I’ve continued enhancing the Stanza catalog I created earlier. New features include purchase links and Google Analytics tracking. I reverse-engineered the FeedBooks API and the AllRomanceBooks.com feeds to figure out how to do some of these things. I’ve also been reading the OPDS spec (an industry-standard successor to the Stanza format) to add support for e-readers on other platforms. For example, Aldiko officially supports OPDS, but it recognizes Stanza tags as well. It crashes if you try to open a PDF file, however, so the catalog just needs to send it the right MIME type.

Here’s the snippet where I reference the custom Analytics class I added:

p.StoreDescription += Mises.Domain.Mobile.Analytics.GetAnalyticsImageTag(this.request.RequestContext.HttpContext);
var content = new TextSyndicationContent(p.StoreDescription, TextSyndicationContentKind.Html);
item.Content = content;

And here’s how to add external links to a book info view:

item.Links.Add(new SyndicationLink(
    new Uri(string.Format("http://mises.org/store/Product.aspx?ProductId={0}&utm_source=MisesCatalog", p.ProductId)),
    "alternate", "Purchase at the Mises Store", "text/html", 0));

10 simple questions to evaluate a software development organization

All credit and inspiration for the questions below goes to Joel Spolsky’s The Joel Test and Bill Tudor’s 2010 update.

My version differs in two respects: first, it has just ten questions, ordered from highest to lowest priority (in my personal opinion). Second, in addition to yes/no questions, it asks open-ended questions intended to give more informative answers in an interview or an analysis of a software team.

  1. Do you use source control? What kind?  What are the requirements/checks to check in code (assignment to work items, unit tests, peer review, etc)?
  2. Do you use a bug database to track all issues? How do you track progress and manage change?
  3. Do you use the best tools money can buy? For example: MSDN/Apple Dev accounts, dual monitors, and powerful workstations.
  4. Do you have a dedicated QA team? Are they involved in the requirement/release management process?
  5. Do you fix bugs and write new code at the same time? How do you balance the two?
  6. Do programmers have quiet working conditions and team meeting rooms?  Describe them.
  7. Do you solicit feedback from end users or customers during the development process?  How is it used?
  8. Do you do a daily build? Do your builds include automated unit tests?
  9. Do you have a requirements management system? Is it integrated with your source control?
  10. Do you create specification/requirements documents? Do you do it before, during, or after writing code?

Bad developer ≠ novice developer

My post on the nature of programming seems to have struck a nerve. Many commenters pondered what makes a developer great. “Ka” thought that:

“You [are] not born [a] good or great programmer, you become one with time and study and hard work. At the beginning, everybody is a bad programmer.”

I disagree. Developers are not born “great,” but greatness does not automatically come with experience. Conversely, lack of experience does not make a developer “bad.” The difference between a great developer and a bad developer is not in their domain knowledge, but their methodology. The distinguishing mark of a great developer is that he codes consciously. Put another way, a good developer always knows why he is doing something. From the perspective of personal ethics, this requires intellectual courage and integrity.

Let me give an illustration of what I mean from personal experience:

When I got into Objective-C development, I was a “bad” developer. Most of my experience is with .Net code. Jumping into the iPhone dev world was intimidating. As a result, I lacked the courage to learn the architecture. I tried to manipulate blocks of code found on the web without understanding what they were doing. I would copy and paste blocks of code and just change variable names. When things didn’t work, I would look for another block of code to substitute for the failing one, or enter “debugging hell” – running code over and over, making random changes and seeing if they worked. This is the hallmark of a bad developer – imitating without understanding. I kept this up for over a year. It’s not that I didn’t try to learn the language. I got several books and watched iTunes U classes. But the way I used the learning materials was to memorize blocks of code and look for places to stuff them into my code. I wasn’t actually learning the platform, just collecting samples. Some developers spend their entire careers this way. They carry collections of old code everywhere they go, and just grab chunks to insert into new programs. They may never select File => New File or File => New Project in their whole career.

After writing a lot of bad, buggy code, I drifted back to the comfort of .Net. Recently, however, I decided to change my attitude. I started by downloading some iPhone code samples and open-source applications. I started in main.m and went through each line of code. If I didn’t understand exactly why a certain symbol was used or what it did, I looked it up. I spent a lot of time on Cocoa Dev Central, Developer.Apple.com, and Stack Overflow looking up things like the reasons why you would assign, retain, or copy a property, or when exactly you need to release an object (for alloc, new, copy, or retain), or what you can do with respondsToSelector. There’s really not that much complexity to programming languages – but if you don’t take the time to learn how things work, they will always seem difficult and mysterious. If I had just looked this stuff up to begin with, I would have been far more productive. But I was intimidated by the environment and tried to shortcut the learning process by imitating without understanding.

Understanding anything complex requires the courage and integrity to engage in difficult, exhausting mental effort. It’s tempting to cheat yourself. It’s easier to spend more time in endless copying and debugging than to take the effort to understand and create. In the short run, copying saves time. But in the long run, developers who understand their craft are magnitudes more productive than the monkey-see-monkey-do coders. This is the difference between the unprincipled kind of laziness that trades understanding for time and the principled kind of laziness that saves time by understanding.

There’s no happy ending to my story — yet. The proof of a developer is in his work, not his book smarts, and I have yet to produce something to brag about.

For more on the traits of great developers, read these posts by Dave Child and micahel.

Get the MD5 hash of a file in .Net

MSDN has a page on how to get the MD5 hash of a string.

Easy enough, but if you follow their example exactly for a file and use File.ReadAllText() to get the string, you will get the wrong MD5 string for binary files. Instead, use File.ReadAllBytes() to bypass encoding issues. (This also applies to SHA1 hashing.)

private static string GetMD5Hash(string filePath)
{
    // Hash the raw bytes so binary files are not corrupted by text decoding.
    byte[] computedHash = new MD5CryptoServiceProvider().ComputeHash(File.ReadAllBytes(filePath));
    var sBuilder = new StringBuilder();
    foreach (byte b in computedHash)
    {
        sBuilder.Append(b.ToString("x2")); // "x2" already produces lowercase hex
    }
    return sBuilder.ToString();
}
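For example, to print a file’s hash (the path below is a hypothetical placeholder):

string hash = GetMD5Hash(@"C:\downloads\installer.zip");
Console.WriteLine(hash); // 32 lowercase hex characters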