Category Archives: code

.Net Graphics: drawing text on a bitmap

Here's the code I use to draw text onto a bitmap loaded from disk:

Graphics backgroundGraphics;
backgroundImage = (Bitmap)Image.FromFile(AppDomain.CurrentDomain.BaseDirectory + @"\images\Header.jpg");
backgroundGraphics = Graphics.FromImage(backgroundImage);
var font = new Font("Perpetua Titling MT", 24F, FontStyle.Regular);
// Draw the author's name in translucent black (alpha 100) at (10, 5)
backgroundGraphics.DrawString(authorname.ToUpper(), font, new SolidBrush(Color.FromArgb(100, 0, 0, 0)), 10, 5);

Now save – or output to the browser:

backgroundImage.Save(context.Response.OutputStream, ImageFormat.Jpeg);
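
If this runs inside an HTTP handler, it's also worth setting the response content type and disposing the GDI+ objects when you're done. A rough sketch, assuming an IHttpHandler and that the author name comes from a hypothetical query-string parameter:

public void ProcessRequest(HttpContext context)
{
    // Hypothetical parameter name; substitute however you pass the author in
    string authorname = context.Request.QueryString["author"] ?? string.Empty;
    string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, @"images\Header.jpg");

    using (var backgroundImage = (Bitmap)Image.FromFile(path))
    using (var backgroundGraphics = Graphics.FromImage(backgroundImage))
    using (var font = new Font("Perpetua Titling MT", 24F, FontStyle.Regular))
    using (var brush = new SolidBrush(Color.FromArgb(100, 0, 0, 0)))
    {
        backgroundGraphics.DrawString(authorname.ToUpper(), font, brush, 10, 5);
        context.Response.ContentType = "image/jpeg";
        backgroundImage.Save(context.Response.OutputStream, ImageFormat.Jpeg);
    }
}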

Response.Redirect and 302 and 301 status codes

If you use Response.Redirect to send users to a new location, be aware that it issues a 302 status code, which means that “the resource resides temporarily under a different URI.” If you intend to communicate that the resource has permanently changed locations, you should not use Response.Redirect. This matters for search engines and other crawlers that need to know the definitive URL.

To send a 301 redirect:

Response.Status = "301 Moved Permanently";
Response.StatusCode = 301;
Response.AddHeader("Location", url);
Response.End();

Update: ASP.NET 4.0 adds a Response.RedirectPermanent() method.
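
With ASP.NET 4.0 or later, the block above collapses to a single call:

// Issues a 301 Moved Permanently redirect and ends the response
Response.RedirectPermanent(url);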

Set the admin color scheme for all WordPress users

These two statements set every user's admin color scheme to the classic theme: the first inserts an admin_color row for users who don't have one yet, and the second updates the value for users who already do.

-- Add the admin_color meta row for users who don't have one yet
INSERT INTO wp_usermeta (user_id, meta_key, meta_value)
SELECT id, 'admin_color', 'classic'
FROM wp_users
WHERE id NOT IN (SELECT user_id FROM wp_usermeta WHERE meta_key = 'admin_color');

-- Update the value for users who already have the row
UPDATE wp_usermeta SET meta_value = 'classic' WHERE meta_key = 'admin_color';
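
To sanity-check the result afterwards, something like this shows how many users ended up with each color scheme:

SELECT meta_value, COUNT(*) AS users
FROM wp_usermeta
WHERE meta_key = 'admin_color'
GROUP BY meta_value;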

SQL script to migrate from Movable Type to WordPress

While trying to migrate a large blog from Movable Type to WordPress, I found the built-in export and import functionality unable to handle the volume of content on the blog or to properly preserve the primary keys needed for permalinks.

With assistance from Alvaro on the MisesDev list, we came up with the following MySQL script to import the entries directly from the Movable Type (5.01) database to WordPress (2.9.2). What would take many hours otherwise can be done in a minute or two. This is especially important if you don’t want to lose data during the time it takes to migrate the blog, as the script can be run immediately before the switch. The script also carries over additional data such as IP addresses and URL-friendly names.

Luhn algorithm validation via a CustomValidator control

The Luhn algorithm is a checksum used for credit cards and many other identifying numbers as a basic integrity check. It’s useful for credit card forms because it avoids unneeded transaction attempts when card numbers are mistyped.
It’s easy to add a Luhn checksum validation control for the account number to credit card forms in ASP.NET:
  • Add an IsCreditCardValid method to your business logic (a sketch appears after this list).
  • Add a <asp:CustomValidator  … /> control with the error message, target control, etc.
  • Add and wire up a void ServerValidation method:
void ServerValidation(object source, ServerValidateEventArgs args)
{
    // Leave empty values to a RequiredFieldValidator; an empty field passes here
    if (txtCardNum.Text == string.Empty) { args.IsValid = true; return; }
    args.IsValid = IsCreditCardValid(this.txtCardNum.Text);
}

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    valLunCode.ServerValidate += ServerValidation;
}
  • (Optional) For client-side validation, add a JavaScript version of the same check.
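
The business-logic method itself isn't shown above; a minimal Luhn-check sketch matching the IsCreditCardValid name used in the validator might look like this:

public static bool IsCreditCardValid(string cardNumber)
{
    // Strip spaces and dashes commonly typed into card fields
    string digits = cardNumber.Replace(" ", string.Empty).Replace("-", string.Empty);
    if (digits.Length == 0) return false;

    int sum = 0;
    bool doubleIt = false;
    // Walk the digits right to left, doubling every second digit
    for (int i = digits.Length - 1; i >= 0; i--)
    {
        if (!char.IsDigit(digits[i])) return false;
        int d = digits[i] - '0';
        if (doubleIt)
        {
            d *= 2;
            if (d > 9) d -= 9;
        }
        sum += d;
        doubleIt = !doubleIt;
    }
    // Valid when the checksum is a multiple of 10
    return sum % 10 == 0;
}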

Correct photo orientation using EXIF data with C#

When processing photos, sometimes you want to re-orient the photo according to the orientation recorded by the camera (such as the iPhone’s accelerometer) and stored in the EXIF metadata. It’s easy to do:

// Rotate the image according to EXIF data
var bmp = new Bitmap(pathToImageFile);
var exif = new EXIFextractor(ref bmp, "n"); // get source from http://www.codeproject.com/KB/graphics/exifextractor.aspx?fid=207371
 
if (exif["Orientation"] != null)
{
    RotateFlipType flip = OrientationToFlipType(exif["Orientation"].ToString());

    if (flip != RotateFlipType.RotateNoneFlipNone) // don't flip if the orientation is already correct
    {
        bmp.RotateFlip(flip);
        exif.setTag(0x112, "1"); // Optional: reset the EXIF orientation tag to "normal"
        bmp.Save(pathToImageFile, ImageFormat.Jpeg);
    }
}
 
// Match the orientation code to the correct rotation:
 
private static RotateFlipType OrientationToFlipType(string orientation)
{
    switch (int.Parse(orientation))
    {
        case 1: return RotateFlipType.RotateNoneFlipNone;
        case 2: return RotateFlipType.RotateNoneFlipX;
        case 3: return RotateFlipType.Rotate180FlipNone;
        case 4: return RotateFlipType.Rotate180FlipX;
        case 5: return RotateFlipType.Rotate90FlipX;
        case 6: return RotateFlipType.Rotate90FlipNone;
        case 7: return RotateFlipType.Rotate270FlipX;
        case 8: return RotateFlipType.Rotate270FlipNone;
        default: return RotateFlipType.RotateNoneFlipNone;
    }
}

Maxing out HTTP compression in IIS7

1: In IIS7 Manager, enable dynamic and static compression. (This adds <urlCompression doStaticCompression="true" doDynamicCompression="true" /> to applicationHost.config.)

2: Open C:\Windows\System32\inetsrv\config\applicationHost.config and go to the <httpCompression> section. For both <dynamicTypes> and <staticTypes>, add: <add mimeType="*/*" enabled="true" />

3: Run appcmd from %systemroot%\system32\inetsrv:
appcmd set config /section:httpCompression /[name='gzip'].dynamicCompressionLevel:10
appcmd set config /section:httpCompression /[name='gzip'].staticCompressionLevel:10

(Set the value to 7, 8, or 9 for lower CPU usage.)
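
After those steps, the relevant parts of applicationHost.config should look roughly like this (a sketch; your staticTypes and dynamicTypes lists will usually contain other entries as well):

<urlCompression doStaticCompression="true" doDynamicCompression="true" />
<httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
  <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll"
          staticCompressionLevel="10" dynamicCompressionLevel="10" />
  <staticTypes>
    <add mimeType="*/*" enabled="true" />
  </staticTypes>
  <dynamicTypes>
    <add mimeType="*/*" enabled="true" />
  </dynamicTypes>
</httpCompression>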

Note: compressing static files prevents them from displaying incrementally while the rest of the file downloads in the background. Incremental display can be useful for partially loaded PDFs, text files, images, etc., so it may be appropriate to enable or disable compression per directory in some cases.

More information on IIS7 HTTP compression.

Multithreaded queue process with C# & BackgroundWorker

And now for something completely different:

This weekend, my mail server was slammed by a spammer using a rogue account to create hundreds of thousands of spam emails that jammed my outbound mail queue. Mixed with the spam were valuable customer emails, so I had to sort through all the mail ASAP and delete anything that wasn’t legit.

First I tried a simple loop that loaded each file and deleted it if it contained a bad string. But that was taking a while, so I made my filter multithreaded.

First, I load a list of files to process:

string[] files = Directory.GetFiles(directory);
Console.WriteLine(files.Length + " files.");

(You can iterate through the files instead, but I wanted to see how many files there are.)

Then I instantiate the class that wraps the BackgroundWorker:

DeleteProcess deleteProcess = new DeleteProcess();

Now, I loop through the files, checking each for spam:

foreach (string mFile in files)
{
    if (CheckBlacklist(mFile))
    {
        deleteProcess.filesToDelete.Add(mFile);
        // Start the background delete thread if it isn't already running
        if (!deleteProcess.worker.IsBusy)
            deleteProcess.worker.RunWorkerAsync();
    }
}

Instead of loading the whole file, I just read it until I determine that it is spam. Since 99% of messages were spam, this went pretty quickly:

private static bool CheckBlacklist(string mFile)
{
    using (StreamReader reader = new StreamReader(new FileStream(mFile, FileMode.Open, FileAccess.Read)))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            // Stop reading as soon as a blacklisted string turns up
            if (line.Contains("NIGERIA") || line.Contains("Message Delivery Delay"))
                return true;
        }
    }
    return false;
}

(By using FileAccess.Read, I speed things up a bit.)

Now for the delete thread. Here is how it’s wired up:

public List<string> filesToDelete = new List<string>();

public BackgroundWorker worker = new BackgroundWorker { WorkerReportsProgress = true, WorkerSupportsCancellation = true };

public DeleteProcess()
{
    worker.DoWork += worker_DoWork;
    worker.ProgressChanged += worker_ProgressChanged;
    worker.RunWorkerCompleted += worker_RunWorkerCompleted;
}
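
The ProgressChanged and RunWorkerCompleted handlers wired up above aren't shown; a minimal sketch, assuming they just write progress to the console, might be:

private void worker_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    // e.UserState carries the file name passed to ReportProgress
    Console.WriteLine("Deleting " + e.UserState);
}

private void worker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    Console.WriteLine("Delete queue is empty.");
}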

The worker thread takes the first file name from the queue, deletes the file, and then removes the name from the list:

private void worker_DoWork(object sender, DoWorkEventArgs e)
{
    while (filesToDelete.Count > 0)
    {
        // Report the file name (relative to the queue directory) as progress
        worker.ReportProgress(0, filesToDelete[0].Replace(Program.directory, string.Empty));
        File.Delete(filesToDelete[0]);
        // Delete the matching copy in the Outgoing folder as well
        File.Delete(filesToDelete[0].Replace(@"OutgoingMessages", @"Outgoing"));
        filesToDelete.RemoveAt(0);
    }
}

When we’re done, we count the remaining files:

Console.WriteLine(Directory.GetFiles(Program.directory).Length + " files left.");

It’s possible to create a collection of BackgroundWorkers if you want to utilize multiple CPUs, but the bottleneck in this case was disk I/O, so that wouldn’t have helped.