WordPress Blog and ASP.NET MVC Integration

5 12 2010

This was too easy/cool to keep tucked away. I wanted the cheapest, most cheerful way of listing the 10 latest blog posts inside a larger, containing website I am building for a client. I started out by Googling “wordpress integration c#”. Unexpectedly, nothing like what I was looking for surfaced. Most solutions seemed to rely on having WordPress installed physically beneath the website, or proposed options such as FTP based integration. Yuck! No. I just wanted the 10 latest posts in real time (i.e. as of now). Hold on, I thought, that’s exactly what feeds were designed to do. Blog engines (such as WordPress) have excellent feed support, including a REST based API and support for both RSS and Atom. Wowzers.

There is plenty of good information about those topics elsewhere. I really just want to demonstrate how simple it is to query such a feed using a server side approach (assuming your web server has HTTP GET access to the feed over the Internet). To do this, I used System.Net.WebClient, LINQ to XML and ASP.NET MVC 3. The feed I am consuming is for a WordPress blog, but there is no coupling that requires the blog to be WordPress based. It could, for example, be a Blogger feed.

Another factor to consider is whether a server side (e.g. ASP.NET) or a client side (e.g. jQuery) approach is a better fit for you. If, for example, search engine (SEO) visibility of the integrated blog content is important to you, a server side approach is likely the better fit.

1. Using a browser, review the feed you’re targeting. For example, if the (WordPress) blog is http://example.com then have a look at http://example.com/feed/ or http://example.com/feed/atom. Below is a sample of the first chunk of the Atom feed for this WordPress blog.

<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"
  xmlns:georss="http://www.georss.org/georss"
  xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">
	<title type="text">benCode</title>
	<subtitle type="text">Ben Simmonds: BizTalk Server Guy in Sydney</subtitle>


	<link rel="alternate" type="text/html" href="http://bencode.net" />
	<link rel="self" type="application/atom+xml" href="http://bencode.net/feed/atom/" />

	<generator uri="http://wordpress.com/">WordPress.com</generator>
<link rel="search" type="application/opensearchdescription+xml" href="http://bencode.net/osd.xml" title="benCode" />
<link rel="search" type="application/opensearchdescription+xml" href="http://wordpress.com/opensearch.xml" title="WordPress.com" />
	<link rel='hub' href='http://bencode.net/?pushpress=hub' />
	<entry>
		<title type="html"><![CDATA[SSO Configuration Road Block]]></title>
		<link rel="alternate" type="text/html" href="http://bencode.net/2010/11/24/sso-configuration-road-block/" />
		<category scheme="http://bencode.net" term="BizTalk" /><category scheme="http://bencode.net" term="SSO" /><category scheme="http://bencode.net" term="Error" />		<summary type="html"><![CDATA[Recently I’ve had the need to setup a BizTalk Server 2006 R2 virtual machine. Quietly confident about my experience with this version of BizTalk, I jumped in head first to *quickly* get a simple single server based installation configured on a 32-bit VMWare based VM. Lesson learned today…never, ever underestimate the obscure errors that BizTalk [...]<img alt="" border="0" src="http://stats.wordpress.com/b.gif?host=bencode.net&amp;blog=2452880&amp;post=109&amp;subd=bencode&amp;ref=&amp;feed=1" width="1" height="1" />]]></summary>
		<content type="html" xml:base="http://bencode.net/2010/11/24/sso-configuration-road-block/"><![CDATA[<p>Recently I’ve had the need to setup a BizTalk Server 2006 R2 virtual machine. Quietly confident about my experience with this version of BizTalk, I jumped in head first to *quickly* get a simple single server based installation configured on a 32-bit VMWare based VM.</p>

2. Below is a snippet of simple C# that leverages LINQ to XML to parse an Atom feed. This code lives in a controller’s action method because I’m using ASP.NET MVC, but again, nothing here requires you to be using MVC. Side note: LINQ to XML makes working with XML much more fluid for the C#/VB developer. Very, very nice to use!

public ActionResult Index()
{
    // The Atom namespace, needed to resolve element names in the feed XML.
    XNamespace atom = "http://www.w3.org/2005/Atom";

    var client = new WebClient();
    var feed = client.DownloadString("http://www.bencode.net/feed/atom");
    var document = XDocument.Parse(feed);

    var blogs =
        from e in document.Descendants(atom + "entry")
        select new BlogModel
        {
            Title = (string)e.Element(atom + "title"),
            Content = new HtmlString((string)e.Element(atom + "content"))
        };

    return View(blogs);
}

3. A simple MVC 3 Razor view:

@model IEnumerable<MvcApplication.Model.BlogModel>

@{
    ViewBag.Title = "Blogs";
    Layout = "~/Views/Shared/_Layout.cshtml";
}

@foreach (var blog in Model) {
    <div class="blogpost">
        <h2>@blog.Title</h2>
        @blog.Content
    </div>
}

4. Downloading, streaming, parsing and rendering this feed should be considered an expensive operation, something you probably don’t want happening for every request that hits your page/site. To cache this entire “pipeline” of work, I went with some ASP.NET MVC output caching, by marking up the controller action with the following attribute:

[OutputCache(VaryByParam = "none", Duration = 60)] // cache the rendered output for 60 seconds
public ActionResult Index()

The result:

Back To Basics: Managing Databases

5 06 2010

At a new client’s site the other day, it struck me that the more companies I work for, the deeper my knowledge of effective work practices becomes. In other words, over time you see things that work well, and things that don’t. I’m talking about simple practices that, when applied to teams, result in higher quality and/or more efficiently built software.

Some companies I’ve worked for have demonstrated mature ITIL based change control processes, with dedicated change control teams and separate environments for development and testing. Sadly, over time those environments have drifted so far out of sync with production that they give absolutely no indication of what will happen in the real production environment. And management wonders why production deployments are so unsuccessful, having spent gazillions on ITIL training and manuals. A simple problem with a simple solution, but one rarely practiced in the real world.

Another area I regularly see handled poorly is managing database change across a development team. Here I present a technique so primitive and proven that it doesn’t require the database tooling that ships with the likes of Visual Studio Team System (VSTS) 2008 Database Edition or Redgate SQL Compare. If a team has such tools at their disposal then great, use them; they can make life easier. But I am dumbfounded by the number of environments I see where database change is completely unmanaged.

Yes, databases and their associated artefacts (functions, triggers, message broker queues and so on) should be managed and versioned. Again, a simple problem with a simple solution, but one that tends to be practiced poorly in the real world.

In the source repository that the team uses, create a directory hierarchy that implies some sort of sequence (e.g. prefix each directory with a numeral). Start off by scripting the database itself and its requirements, such as filegroup options and collation types. Remove any code-generated guff to keep the scripts as clean and readable as possible. Then move on to tables, and then to the objects that work with the tables, such as foreign keys, triggers, stored procedures and functions. An example structure could look like this:

– “01 Database”
– “02 Tables”
– “03 Foreign Keys”
– “04 Triggers”
– “05 Stored Procedures”
– “06 Functions”
– “07 Queues”
– …
– “10 Data”

Over the lifecycle of the project this structure should be completely populated with the artefacts necessary to build the target database from scratch. No restoration of backups needed.
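Replaying that hierarchy is then just a matter of walking the numbered directories in order and running every script inside each one. Below is a minimal POSIX shell sketch of the idea (the function name and runner command are illustrative; in real use the runner would be something like sqlcmd -i):

```shell
# run_sql_tree: apply every .sql script found in the numbered directories,
# in directory order. "$1" is the command used to execute each script.
run_sql_tree() {
  runner="$1"
  for dir in [0-9]*/ ; do            # numbered directories sort in sequence
    for script in "$dir"*.sql ; do   # every script inside the directory
      [ -e "$script" ] || continue   # skip when the glob matches nothing
      $runner "$script"
    done
  done
}
```

Calling run_sql_tree "sqlcmd -S localhost -d TargetDb -i" from the repository root would then build the database end to end (server and database names here are hypothetical).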

Because the number of scripts contained in a single directory could become overwhelming with time, a copy of the batch script below (“all.bat”) could be placed in each directory. It enumerates and concatenates every “.sql” file in that directory to produce one large SQL script, “_all.sql”. Running in 150 stored procedure scripts then becomes as simple as running “_all.sql” from the stored procedure directory.

@echo off

del /F /Q "_all.sql" 2>NUL

for /f "delims=" %%a in ('dir /b *.sql') do (
  type "%%a" >> "_all.sql"
)
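For what it’s worth, the same concatenation is trivial in POSIX shell too; a sketch (function name illustrative):

```shell
# concat_sql: POSIX sketch of all.bat. Concatenates every .sql file in the
# current directory into one combined _all.sql script.
concat_sql() {
  rm -f _all.sql              # drop any previous combined script first
  for f in *.sql ; do
    [ -e "$f" ] || continue   # skip when no .sql files are present
    cat "$f" >> _all.sql
  done
}
```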
When it comes to scripting the data (lookup data and sample data should be versioned too), I find it hard to pass up the simplicity of the sp_generate_inserts gem I found a few years ago. It’s basically a stored procedure that gets created in your master database (and is therefore resolvable from any database on the same instance), providing a rich set of options for scripting your data (e.g. EXECUTE sp_generate_inserts footable, @ommit_identity=1).

Subversion Repository

17 04 2010

I’ve been working on a number of personal projects lately and need reliable, fast and possibly multi-user source control. There are many options available, but for me VS.NET integration is a must. Without focusing too much on my rationale for choosing SVN, there are two fairly mature and rich VS.NET providers: AnkhSVN and VisualSVN. I use VisualSVN and it rocks.

What follows is the fastest path to getting a repository up and running over the native SVN protocol (which listens on port 3690 by default).

  1. Download pre-built binaries from CollabNet (I had TortoiseSVN compatibility issues with other distributions such as SlikSVN).
  2. Create the repository: svnadmin create “c:\svn\repository”
  3. Edit conf/svnserve.conf. Uncomment the lines (anon-access = read, auth-access = write, password-db = passwd)
  4. Edit conf/passwd. Register users and their passwords here.
  5. Register a Windows service (daemon): sc create svnserver binpath= "\"C:\Program Files (x86)\CollabNet\Subversion Server\svnserve.exe\" --service -r c:\svn\repository" displayname= "Subversion" depend= Tcpip start= auto

That’s it! Just connect to svn://localhost using TortoiseSVN and/or the other VS.NET providers. Remember to open up port 3690 to make the repository available over a network.

I find that setting up the TTB (tags/trunk/branches) style structure up front pays off downstream, once activities like tagging or branching start taking place.
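Creating that structure can be done in a single commit with svn mkdir. A sketch using the file:// protocol against an illustrative local repository path (so no running svnserve daemon is needed):

```shell
# Create a repository and its tags/trunk/branches structure up front.
# The repository path "ttb-repo" is illustrative.
svnadmin create ttb-repo
REPO="file://$(pwd)/ttb-repo"
svn mkdir -m "Create TTB structure" \
  "$REPO/trunk" "$REPO/branches" "$REPO/tags"
svn ls "$REPO"
```

The same svn mkdir command works against svn:// URLs once the daemon is up.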
