
Getting to know Entity Framework – Table Per Type (TPT) Inheritance March 2, 2011

Posted by reddogaw in Uncategorized.

I’ve been using LINQ to SQL for quite some time; until recently I’ve had little need for Entity Framework’s EDMX mappings, because we’ve generally been content with the one-to-one table-to-entity mappings that LINQ to SQL gives us. However, with the addition of a new module that is close to, but not quite the same as, an existing module, we’ve had to call on Entity Framework to generate a proper inheritance hierarchy.

What is Table Per Type (TPT) inheritance?

It’s a method of defining your SQL schema such that each sub-type lives in its own table and shares a primary key with the parent table via a foreign key. That is, the child table’s primary key maps one-to-one to the parent table’s primary key.

SQL schema with an Employee joined one-to-one with Contact to demonstrate TPT inheritance: Employee is related to Contact using a common primary key linked via a foreign key.

Note: my naming standard is to name primary keys “Id” and any foreign keys with the table name (and potentially a verb) prefixed, e.g. “ContactId”.
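
As a concrete sketch of that schema (the non-key columns here are illustrative, not from the original post):

```sql
CREATE TABLE Contact (
    Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    FirstName NVARCHAR(50) NOT NULL,
    LastName NVARCHAR(50) NOT NULL
);

-- The sub-type's primary key doubles as a foreign key to the parent,
-- giving the one-to-one mapping that TPT relies on.
CREATE TABLE Employee (
    ContactId INT NOT NULL PRIMARY KEY
        REFERENCES Contact (Id),
    HireDate DATETIME NOT NULL
);
```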

Generate your EDMX file

Start your .edmx (ADO.NET Entity Data Model Designer) file as described in many tutorials.

Initially your tables will come through as associated entities. This is where we’ll pick up…

  1. From the Toolbox, pick up the “Inheritance” tool and draw the inheritance from Employee to Contact.
  2. Delete the Association joining the two tables.
  3. Now you’ll find that if you try to compile, you’ll get:

    Error 3024: Problem in Mapping Fragment starting at line 79: Must specify mapping for all key properties (Contact.Id) of the EntitySet Contact.

    The problem here is that our Employee’s ContactId didn’t auto-map to the Contact’s Id field.

  4. Click on Employee, open the Mapping Details view and select the Column Mapping for ContactId.
  5. Using the drop-down, change the mapping to “Id”. The drop-down now includes all properties from Contact as well.

    Mapping columns in the Mapping Details editor: map the Employee's ContactId column to the Contact's Id column.

  6. Again, you’ll find that if you try to compile, you’ll get:

    Error 11009: Property ‘ContactId’ is not mapped.

    Wait… Sure it is! Didn’t we just map it to the Contact’s Id column? Ummmm… Well, we won’t need it anyway, so just delete it from the Employee entity on the diagram.

    Deleting an entity field: delete the Employee's ContactId field.

  7. Compile, and all is finally well!

Problems downloading .vhd from Microsoft February 11, 2009

Posted by reddogaw in Uncategorized.

Thought I’d post since this seems common enough: I downloaded the latest SharePoint WSS 3.0 Virtual PC files only to find I’d wasted my time, because during extraction I was presented with:

Packed data CRC failed in WSS-July2008.vhd

I believe the first part of the message is common to other VHD downloads too, including the CRM VPC download, the MOSS download, and others.

It turns out the problem is downloading the file in resume mode and/or over multiple connections. So don’t use a download manager!

After I went back and re-downloaded (all 4 GB) using just the native IE download (and being sure to start again after a connection dropped out halfway through), I got a clean download and a successful RAR extraction…

Hopefully that helps some frustrated soul…

Love,
Reddog.

Gotcha – Implicit sitemap localization resx files belong in App_GlobalResources January 14, 2009

Posted by reddogaw in ASP.NET, Localization, Tips.

When providing a multi-lingual website you’re likely to want to translate your sitemap files for your menus, footer links, etc.

I tend to store most of my translations using implicit localization techniques, whereby you put your .resx files in a relative App_LocalResources folder. However, when it comes to SiteMap localization, don’t be fooled!

They must all go into the site relative App_GlobalResources.

[References: tiny type on step 3 of How to: Localize Site-Map Data and detailed instructions Tutorial: Localize a Web.sitemap file]

Using validator controls with checkboxes November 8, 2008

Posted by reddogaw in ASP.NET.

When attempting to set the ControlToValidate property of any of the built-in validator controls (particularly CustomValidator) to a CheckBox, you’ll get the following runtime error:

Control ‘agreeTermsCheckBox’ referenced by the ControlToValidate property of ‘agreeTermsRequiredTrue’ cannot be validated.

In my case, I want to validate a checkbox for my website’s Terms and Conditions to ensure that it’s been marked as true before creating the account.

It turns out that CheckBox controls (and also HiddenField controls) are not supported by the BaseValidator’s ControlToValidate field, presumably because the value cannot be reliably read or distinguished from an empty value.

Thus there are three possible workarounds:

  1. Don’t set the ControlToValidate property at all, and find the checkbox yourself in your CustomValidator’s ClientValidationFunction and ServerValidate handler.
  2. Use an intermediate control to store the checkbox’s checked state and validate against it using a CompareValidator.
  3. Use an intermediate control to get a reference to the checkbox and validate it using a CustomValidator.

The first is far from ideal since you’ll have to hardcode references to your checkbox’s ClientID and ID in order to check it. The second is OK, except that you have to add client-side events to your checkbox, which can be a little fiddly to get working.

The third is what I ended up doing.

In my case, I added a TextBox to the form whose Text property stores the ClientID of the CheckBox. Ideally I would have used a hidden field, but apparently they’re not supported either! I hide the TextBox using its CssClass (with display: none and visibility: hidden).

<asp:TextBox ID="agreeTermsCheckBoxIdField" runat="server" CssClass="hidden" Text='<%# Container.FindControl("agreeTermsCheckBox").ClientID %>' />

Then I add my CustomValidator with ControlToValidate pointing at this new TextBox.

<asp:CustomValidator ID="agreeTermsRequiredTrue" runat="server" Text="*" ControlToValidate="agreeTermsCheckBoxIdField" EnableClientScript="true" ClientValidationFunction="ValidateCheckBoxIsTrue" OnServerValidate="agreeTermsRequiredTrue_ServerValidate" />

The great thing about this is that the Value property of the validation function’s arguments will automatically be set to the ClientID of the checkbox, making it easy for our client-side function to locate the CheckBox!

function ValidateCheckBoxIsTrue(source, clientside_arguments) {
    // Find the checkbox we're validating.
    // In this case, the client ID of the control is stored in a hidden field
    // whose value is passed in via the ControlToValidate property.
    var checkBoxToValidate = document.getElementById(clientside_arguments.Value);

    // Default value - let it pass...
    clientside_arguments.IsValid = true;

    // Set IsValid according to checkbox state.
    if (checkBoxToValidate != null) {
        clientside_arguments.IsValid = checkBoxToValidate.checked;
    }
}
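
The matching server-side handler wired up via OnServerValidate isn’t shown above; a minimal sketch (assuming the code-behind can reach the CheckBox through the hidden TextBox’s naming container, as in the data-bound markup) might be:

```csharp
protected void agreeTermsRequiredTrue_ServerValidate(object source, ServerValidateEventArgs args)
{
    // Server-side we don't need the ClientID round-trip: find the CheckBox
    // in the same naming container as the hidden TextBox and read it directly.
    var checkBox = (CheckBox)agreeTermsCheckBoxIdField.NamingContainer.FindControl("agreeTermsCheckBox");
    args.IsValid = checkBox != null && checkBox.Checked;
}
```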

Dependency Injection into Pages and UserControls using Unity November 8, 2008

Posted by reddogaw in ASP.NET, Unity, WCSF.

Edit (8 Nov 2008): Turns out… it’s not so much the async postbacks, and more to do with a FormView not having initialized its controls at that point in the page lifecycle (on PostBack only, after Page Init). This can be tested with a quick watch on the protected Control.ChildControlsCreated property in the debugger. Additionally, touching the FormView’s Controls collection actually causes the controls within the template not to be loaded on the postback.

The solution (and new rule!): do NOT touch FormView.Controls at all (either to get Count or to iterate); instead, test beforehand using Control.HasControls(). (Note: in the case where user controls are inside a FormView template, this means those controls won’t have their dependencies injected on PostBack.)

Starting from the original source code, I replaced the recursive control-tree function, adding the HasControls() fix and some extra structure to make it more reusable later:



// Build up each control in the page's control tree
private void OnPageInitComplete(object sender, EventArgs e)
{
    Page page = (Page)sender;
    IUnityContainer container = IocContainer;

    // Lambda delegate for controls to apply BuildUp on
    // and delegate for injection.
    Func<Control, bool> shouldApplyFunction = (c => (c is UserControl || c is Page || c is MasterPage));
    Action<Control> injectIntoControl = (c => container.BuildUp(c.GetType(), c));

    // Recurse tree
    RecurseControlTree(page.Controls, injectIntoControl, shouldApplyFunction);
}

private void RecurseControlTree(ControlCollection controls, Action<Control> actionToApply, Func<Control, bool> shouldApplyActionFunction)
{
    foreach (Control child in controls)
    {
        // Iterate children first
        if (child.HasControls())
        {
            RecurseControlTree(child.Controls, actionToApply, shouldApplyActionFunction);
        }

        // Apply function
        if (shouldApplyActionFunction(child))
        {
            actionToApply(child);
        }
    }
}

Hope that helps someone!!! (End edit)

Having recently started playing with the Unity framework for dependency injection (but without going the whole hog and using the WCSF libraries), I was after an HttpModule or handler of some kind that would deal with the injection of dependencies into my pages, user controls, and master pages.

I found a lovely little helper module and pattern by Espresso Fueled Agile Development that almost did the trick perfectly… With one exception… When doing an AJAX postback within an UpdatePanel, things went screwy.

Now, I’ve no idea why, but the problem actually lies with the attempt to traverse the Controls collection for all controls within the page. It turns out that something or other doesn’t like its ControlCollection being scrutinized during an AJAX-initiated postback. So don’t! (As long as you don’t need any injected dependencies in your handlers!)

As far as I’m aware, the only way to know whether a postback is from an AJAX control this early in the pipeline is to check the ScriptManager‘s IsInAsyncPostBack property. Thus, in the UnityHttpModule, replace OnPageInitComplete with the following:

// Build up each control in the page's control tree
private void OnPageInitComplete(object sender, EventArgs e)
{
    Page page = (Page)sender;
    IUnityContainer container = IocContainer;

    var currentScriptManager = ScriptManager.GetCurrent(page);
    if (currentScriptManager == null || !currentScriptManager.IsInAsyncPostBack)
    {
        foreach (Control c in GetControlTree(page))
        {
            container.BuildUp(c.GetType(), c);
        }
    }
}

As an aside, another thing to remember is that since it’s difficult to hook into the instantiation of pages, you need to use Dependency declarations on public properties rather than injection attributes on constructors.
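
For illustration, setter injection on a page might look like the following sketch (ICustomerService is a hypothetical service interface, not from the original module):

```csharp
using Microsoft.Practices.Unity;
using System.Web.UI;

public partial class CustomerPage : Page
{
    // Populated by container.BuildUp() in the UnityHttpModule;
    // constructor injection isn't an option because ASP.NET
    // instantiates the page itself.
    [Dependency]
    public ICustomerService CustomerService { get; set; }
}
```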

Love,
Reddog.

Gotcha – Reporting Services reports won’t page any more September 4, 2008

Posted by reddogaw in Reporting Services, SQL Server.

A recent discovery of a “feature” in Microsoft Reporting Services is that when you toggle visibility for a report element (for example, a table inside a rectangle), paging on that element no longer works.

This is exactly what happened in a recent report where I toggled the grid’s visibility when there was no data, displaying a friendly message instead from a separate rectangle with the opposing visibility expression. In my case, this became evident when I used the ReportViewer component on a page already heavy with content and it took ages to render the report.

Unfortunately this behaviour is built in and can’t be bypassed. See the thread here.

Thus the only option is to turn off the visibility expression…

Filter tables and rename associations generated by SqlMetal using Powershell August 26, 2008

Posted by reddogaw in PowerShell, SQL Server.

One of the most irritating things I initially found when using LINQ to SQL diagrams was the all-or-nothing approach to managing tables and views in a way that allows quick regeneration after database schema changes. Out of the box (VS2008 pre-SP1) you have two options:

  1. Build LINQ to SQL dbml mapping files by hand – dragging tables, views, and stored procedures from the Server Explorer onto the designer surface. This works great and has the added bonus of being able to rename tables, columns, associations, and of course remove them all as well. The designer surface entities are based on definitions in a .dbml file, and the entity code is generated into a .designer.cs file.
  2. Use SqlMetal.exe to generate the .dbml and class for you. I typically use two commands and drop them into a “GenerateDbml.bat” file so that on a schema change we can just double click and go.

My preference definitely lies with the latter since, despite the ability to do otherwise, the object model should be as close to the relational database model as possible. On the cosmetic side it ensures your naming standards are kept consistent, at the database level it ensures that all column types and foreign key mappings are well thought out and meaningfully named, and at the code level it means the amount of custom code in the domain model is kept to a minimum.

My GenerateDbml.bat usually does it in two steps and looks like this:

  1. sqlmetal /server:"." /database:"Northwind" /dbml:NorthwindDb.dbml /context:NorthwindDataContext /namespace:LeftLobeLogic.Northwind.Domain /pluralize
  2. sqlmetal /code:NorthwindDb.designer.cs NorthwindDb.dbml
  3. pause

One thing to notice is that the generated .dbml file is XML. Now, XML may not be great, but at least it gives developers a tonne of standard APIs for dealing with it. In this instance we’re going to leverage that to quickly prune out unwanted tables and relationships before generating our object model, and we’re going to use PowerShell to do it!

As David blogged recently, I’ve created a generic .ps1 script to handle calling SqlMetal as we did in our .bat file. We’ll also parameterise the script to ditch all our Northwind references and set up some usual naming conventions. Our settings for cleaning the generated DBML (removing tables, renaming tables, and renaming associations) are stored in an XML file.
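
As a rough sketch of the pruning idea (the file path and table names here are made up; the real, refactored script is in the GoogleCode project mentioned below):

```powershell
# Load the generated DBML as XML, drop unwanted tables, and save it back.
$path = Resolve-Path 'NorthwindDb.dbml'
$dbml = [xml](Get-Content $path)

# Hypothetical list of tables to prune before code generation.
$unwanted = 'dbo.sysdiagrams', 'dbo.AuditLog'

$dbml.Database.Table |
    Where-Object { $unwanted -contains $_.Name } |
    ForEach-Object { [void]$dbml.DocumentElement.RemoveChild($_) }

$dbml.Save($path)
```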

Now you can simply call it with:

  1. .\Call-SqlMetal.ps1 -database Northwind -cleanerXml "NorthwindCleanSettings.xml"

To download the full source code you’ll find it in Left Lobe Logic’s GoogleCode project (it’s SVN). Note: The code is substantially refactored from that posted by Dave.

I’ll try to share some of the individual lessons from the script in later posts.

Enjoy!

Error while installing SQL Server 2005 – Native Client cannot be found August 26, 2008

Posted by reddogaw in SQL Server.

Recently, while installing SQL Server 2005 Standard Edition on a “fresh” server, I found the crazy thing just wouldn’t install!

I was getting the following error:

“An installation package for the product Microsoft SQL Server Native Client cannot be found. Try the installation again using a valid copy of the installation package ‘sqlncli.msi’.”

Most cryptic… Considering my installation package is the disc given to me by Microsoft and I’m positive that the sqlncli.msi file is right there.

However, the actual issue appears to be as simple as an inability to upgrade an old Native Client version (thanks to Jose for pointing it out). To resolve it, use the Add/Remove Programs panel to uninstall the existing SQL Server Native Client installation and you should be good to roll.

Note that this appears to be common for other editions of SQL Server 2005 too.

PowerShell script modularization June 10, 2008

Posted by reddogaw in PowerShell.

PowerShell has been a sneaky beast whose power is only now starting to creep into my programming psyche. Being the good OO citizen that I am, I immediately started wanting to modularize my PowerShell scripts and functions for maximum re-use and easy organisation.

There are a number of ways to modularize your PowerShell scripts, including (but probably not limited to!):

Love,
Reddog.

Getting the current script directory in PowerShell June 4, 2008

Posted by reddogaw in PowerShell, Tips.

One thing to remember when developing PowerShell scripts is that $PWD holds the process’s current directory, not the directory where the script is stored.

In my desire to break out a few functions into a common file in order to load them “dot-sourced” (so that the functions become global) I realised I needed help!

In comes a new function from the PowerShell Team Blog. It’s going straight into my PowerShell profile: Get-ScriptDirectory. It does exactly what the name says: it gets the directory the current script is executing from.

<snip>

  1. function Get-ScriptDirectory
  2. {
  3. $Invocation = (Get-Variable MyInvocation -Scope 1).Value
  4. Split-Path $Invocation.MyCommand.Path
  5. }

</snip>

So that now from my script that requires the Common-Functions.ps1 (located in the same directory) I can call:

  1. # Load up our common functions
  2. $commons = Join-Path (Get-ScriptDirectory) "Common-Functions.ps1";
  3. . $commons;

Love,
Reddog.
