Support multiple JWT Authorities in a .NET Core application

This post describes how you can support multiple JWT Authorities in your (ASP).NET Core application and how you can select the correct authority for each request.

This post doesn’t cover the integration with an OIDC service or proxy. It assumes there is a JWT token on the request and that your application uses this token to perform authorization based on some policies you’ve defined in your application.

Suppose your organization is upgrading to the new kid on the block: Azure B2C. During this switch you want to support both your old JWT authority and the new one. It makes your application more robust, and a rollback scenario becomes easier or even completely unnecessary.

Once you’ve seen the code, it’s pretty straightforward, but it took me some time to actually implement and understand it. Mostly because I lacked the raw intelligence to make sense of all the technical terms, wording, moving parts and the overall architecture of how OIDC authentication and JWT authentication play together.

For clarity’s sake I’m going to add the code in Program.cs, but you could/should move it to its own IServiceCollection extension method. As you’re supposed to as the super-senior, enterprisey, abstraction-loving developer.
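
A minimal sketch of what that extension method could look like (the class and method names are my own invention):

public static class AuthenticationExtensions
{
    // Hypothetical wrapper around the AddAuthentication / AddJwtBearer /
    // AddPolicyScheme calls shown in the rest of this post
    public static IServiceCollection AddMultiJwtAuthentication(
        this IServiceCollection services, IConfiguration configuration)
    {
        var auth = services.AddAuthentication(options =>
        {
            options.DefaultScheme = "AZUREB2C_OR_LEGACY";
            options.DefaultChallengeScheme = "AZUREB2C_OR_LEGACY";
        });

        // ... the AddJwtBearer and AddPolicyScheme setup from below goes here

        return services;
    }
}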

First, add the authentication:

var auth = services.AddAuthentication(options =>
{
    options.DefaultScheme = "AZUREB2C_OR_LEGACY";
    options.DefaultChallengeScheme = "AZUREB2C_OR_LEGACY";
});

The value of DefaultScheme (and DefaultChallengeScheme) can be ANYTHING! I didn’t know this at first. I assumed it had to be a pre-defined name, something like JwtBearerDefaults.AuthenticationScheme. Took me some time to recover from this insight. Best is to give it a name every next developer can understand. Don’t worry about a long name: you don’t have to type it that often. Put it in a const and profit! I have a thing with naming. I like clear names.
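
For example (the names are just a suggestion):

public static class AuthenticationSchemes
{
    // Used for DefaultScheme, DefaultChallengeScheme and the policy scheme name
    public const string AzureB2COrLegacy = "AZUREB2C_OR_LEGACY";
}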

Next up, we need to add/configure the 2 (or more) JWT bearers. I’m reading the JWT Authorities from the appsettings.json:

"JwtAuthorities": [
  {
    "Name": "adfs",
    "Issuer": "https://adfs.yourdomain.com/adfs"
  },
  {
    "Name": "azure-b2c-flowname",
    "Issuer": "https://account.yourdomain.com/tfp/yourtenantname.onmicrosoft.com/azure-b2c-flowname/v2.0/"
  }
],

Add the JWT Bearer config:

var jwtAuthorities = configuration.GetSection("JwtAuthorities");
foreach (var jwtAuthority in jwtAuthorities.GetChildren())
{
    var name = jwtAuthority["Name"];
    var issuer = jwtAuthority["Issuer"];
    auth.AddJwtBearer(name, options =>
    {
        options.RequireHttpsMetadata = false;
        options.SaveToken = true;
        options.Authority = issuer;
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuerSigningKey = false,
            ValidateIssuer = false,
            ValidateAudience = false,
            ValidateLifetime = false,
            ValidateActor = false
        };
    });
}

Obviously, tweak this to your needs. I’ve only disabled all validation here to keep the example short; don’t do this in production.
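
If you do want validation (and in production you usually do), a stricter configuration could look something like this (a sketch; the audience value is a placeholder for your own):

options.TokenValidationParameters = new TokenValidationParameters
{
    ValidateIssuerSigningKey = true,
    ValidateIssuer = true,
    ValidIssuer = issuer,
    ValidateAudience = true,
    ValidAudience = "your-api-audience",
    ValidateLifetime = true
};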

Now we’ve added the JWT authorities, but we haven’t specified when to use which one. This can be achieved by adding a so-called PolicyScheme to the authentication pipeline. I was thrown off by this name. I’m not a native English speaker and I didn’t link the word ‘Policy’ to my intention of ‘deciding which JWT handler to use for a request’.
Please don’t judge me.

After this great realization, it was pretty straightforward. Here is the code with some explanation:

auth.AddPolicyScheme("AZUREB2C_OR_LEGACY", "AZUREB2C_OR_LEGACY", options =>
{
    var fallbackScheme = jwtAuthorities.GetChildren().FirstOrDefault()?["Name"];
    options.ForwardDefaultSelector = context =>
    {
        string authorization = context.Request.Headers[HeaderNames.Authorization];
        if (!string.IsNullOrEmpty(authorization) && authorization.StartsWith("Bearer "))
        {
            var token = authorization.Substring("Bearer ".Length).Trim();
            var jwtHandler = new JwtSecurityTokenHandler();
            return jwtHandler.ReadJwtToken(token).Claims.FirstOrDefault(c => c.Type == "tfp")?.Value ??
                   fallbackScheme;
        }
        return fallbackScheme;
    };
});

A custom policy scheme (which is what we are dealing with here) needs a name and a display name. Not sure what the display name is used for, but let’s not bother with it.

I’ve defined a fallback scheme in case the logic fails, but in theory this should never happen. One could (should?) also throw an exception as we’re dealing with a JWT token we can’t handle.

After setting the fallback scheme, we configure the ForwardDefaultSelector. This is a .NET construct that selects the correct scheme to use for the current request, based on logic we feed it.

Our logic first reads the token from the ‘Authorization’ header (HeaderNames.Authorization). If it’s not null or empty, we strip off the ‘Bearer ’ part and pass the raw JWT token to the JwtSecurityTokenHandler class. This class parses the token and extracts useful information. In this case we’re looking for the tfp claim in the token. TFP stands for ‘Trust Framework Policy’ and contains the name of the policy that was used to acquire the token in Azure B2C. We then return this name as the name of the scheme to use for this request. That is the name as we configured it in the appsettings.json, so make sure these names match! The scheme was added and configured in the previous step: AddJwtBearer(name, options => ...

And that’s all there is to it! It reads the token from the request, looks for the tfp claim, and based on that name it selects the correct scheme as added to the authentication pipeline.

Your logic can be different of course. You could even select a different scheme based on the URL, since you have access to the complete request object. But be careful: this is a potential performance disaster waiting to happen if you put too much logic in here.

Hopefully this post has helped you to implement your own ‘multiple JWT authorities selector logic thingy’ 🙂

A Source Generator for your appsettings.json

Recently Microsoft introduced a new feature for .NET called ‘Source Generators’. It’s still in preview and will (probably) be released with .NET 5.

Source Generators seem to excite a lot of people. So what are Source Generators exactly? A short answer could be: Source Generators can add code during compilation time. If that’s not satisfying, check out the official blog post from Microsoft. Or check one of the samples described in this blog post.

I decided to give it a go and wanted to write a Source Generator that generates POCOs for your appsettings.json. .NET Core introduced strongly typed configuration, but it still requires you to write the classes manually. E.g. this piece of config requires this class:

  "RemoteService": {
    "BaseUrl": "https://url/to/service",
    "DisplayName": "My Service"
  },
    public class RemoteService
    {
        public string BaseUrl { get; set; }
        public string DisplayName { get; set; }
    }

This seemed ‘automatable’, so off we go. Our objective is to write a Source Generator that generates these classes for us. Whenever we add a property in the appsettings.json, we want our configuration POCOs to update.

In order to write a Source Generator you need to have Visual Studio 2019. (This will change in the next .NET 5 release)

We start by implementing an interface called ISourceGenerator. This interface has 2 methods, but for now we are only interested in the Execute method (Initialize can stay empty):

    [Generator]
    public class GeneratedConfigClasses : ISourceGenerator
    {
        public void Initialize(GeneratorInitializationContext context)
        {
            // Nothing to initialize for this generator
        }

        public void Execute(GeneratorExecutionContext context)
        {
           // Implementation
        }
    }

In the Execute method we have access to the compilation context, which allows us to add code to the current compilation. A good visualization of the compilation pipeline can be found in the official announcement: https://devblogs.microsoft.com/dotnet/introducing-c-source-generators/

The real implementation can be found on GitHub. Some things to note:

  • Your generator must be decorated with the [Generator] attribute
  • The actual source-code is added to the compilation with this method: context.AddSource("MyAppConfig", SourceText.From(sourceBuilder.ToString(), Encoding.UTF8));

We want our generator to generate code for appsettings.json, but also for appsettings.Development.json. And possibly for more files. So I implemented a merging strategy that merges appsettings files. If 2 config files have the same string/boolean/int key, it’s easy to know which one to choose (it doesn’t matter ;)). But if there are 2 settings with nested settings, we choose the setting with the most nested settings. It’s very basic, but it seems to work ok. A rough sketch follows below.
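
A rough sketch of that merge rule, assuming both files have already been parsed into a Dictionary<string, object> (this is not the exact repo code):

static Dictionary<string, object> Merge(Dictionary<string, object> a, Dictionary<string, object> b)
{
    var result = new Dictionary<string, object>(a);
    foreach (var kvp in b)
    {
        if (!result.TryGetValue(kvp.Key, out var existing))
        {
            // Key only exists in 'b': take it as-is
            result[kvp.Key] = kvp.Value;
        }
        else if (existing is Dictionary<string, object> left && kvp.Value is Dictionary<string, object> right)
        {
            // Both are nested settings: keep the one with the most nested settings
            result[kvp.Key] = right.Count > left.Count ? right : left;
        }
        // Scalar conflict: it doesn't matter which one wins, so keep the value from 'a'
    }
    return result;
}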

In order for our Source Generator to know which appsettings files to use, we have to specify this when registering our Source Generator. In the target project file (.csproj) you have to add an ItemGroup with AdditionalFiles:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>    
    <ProjectReference Include="..\ConfigGenerator\ConfigGenerator.csproj" OutputItemType="Analyzer" ReferenceOutputAssembly="false" />
    <PackageReference Include="System.Text.Json" Version="5.0.0-rc.2.20475.5" />
  </ItemGroup>

  <ItemGroup>
    <AdditionalFiles Include="appsettings.json" />
    <AdditionalFiles Include="appsettings.Development.json" />
  </ItemGroup>

</Project>

Our Source Generator can read these files from the context:

foreach (var configFile in context.AdditionalFiles)
{
   //...
}
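
For example, filtering to the appsettings files and reading their contents could look like this (a sketch; the real implementation is in the repo):

foreach (var configFile in context.AdditionalFiles)
{
    // Only process the appsettings*.json files registered as AdditionalFiles
    if (!Path.GetFileName(configFile.Path).StartsWith("appsettings", StringComparison.OrdinalIgnoreCase))
        continue;

    // AdditionalText.GetText returns the file contents at compile time
    var json = configFile.GetText(context.CancellationToken)?.ToString();
    // ... parse, merge and generate
}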

After reading and merging the JSON files, we deserialize them into a Dictionary<string, object> and then generate the actual source code. See the source on GitHub for the details. The last line of our generator adds the generated code to the compilation:

context.AddSource("MyAppConfig", SourceText.From(sourceBuilder.ToString(), Encoding.UTF8));

When developing a Source Generator you have to restart Visual Studio to get rid of the red squiggles and some other undefined errors. But once a generator is registered it works pretty ok. As soon as you add a property to your appsettings.json, our Source Generator kicks in and generates the new sources. They are almost immediately available in Visual Studio.

The generated configuration classes are available in the ApplicationConfig namespace. The main class is MyAppConfig. The nested classes are in a different namespace: ApplicationConfigurationSections.

After updating / creating your appsettings.json file you can register your configuration in your Startup with the following code (it could require a restart before intellisense kicks in):

 services.Configure<ApplicationConfig.MyAppConfig>(Configuration);
 services.Configure<ApplicationConfigurationSections.Logging>(Configuration.GetSection(nameof(ApplicationConfigurationSections.Logging)));

You can find the full source code on GitHub.

Some things to take into account:

  • This is an example and I’m not sure yet how useful it is. Consider it pre-alpha.
  • It supports ints, booleans, arrays and strings (and nested objects)
  • Source Generators are in preview. Things may change.
  • I might turn it into a NuGet package if it turns out to be useful

Integrating Blazor WebAssembly with Stripe

All the code is available here: https://github.com/albertromkes/StripeIntegration

For a personal project, built with Blazor WebAssembly, I needed an integration with a payment provider. Since Stripe seems to be a perfect choice and integrates with everything, I decided to give it a go. I had never actually built an integration with such a service, but I was pretty sure someone on the big internet already had. After a couple of searches I seriously began to doubt my google-fu. I couldn’t find any decent article/how-to on how to integrate Blazor WebAssembly with Stripe. This was one of the first times the internet let me down. So, instead of using only 2 keys of my keyboard, I now had to use most of them to get this going.

The integration consists roughly of 3 parts:

  • Client
    • Written in Blazor WebAssembly
  • Server
    • Our own ASP.NET Core backend
  • Stripe
    • The Stripe payment backend.

Client

In order to make a payment, your client (browser) needs to communicate with your own backend and with the Stripe servers.

Create a new Blazor WebAssembly project hosted in .NET Core. Don’t forget to check the ‘ASP.NET Core hosted’ option.

I’m totally not focusing on design in this post; I don’t want to trigger someone’s OCD. Next thing we need to do is install some dependencies. I’m going to use blazor-fluxor to make it easy to handle async requests. So let’s install it with:

Install-Package Blazor.Fluxor -Version 1.4.0

Next, add some boilerplate to get Fluxor going:

  1. In the Client project open the file App.razor
  2. At the top of the file add @inject Blazor.Fluxor.IStore Store
  3. Then add @Store.Initialize() – This will initialize the store and inject any required JavaScript
  4. Edit the /wwwroot/index.html file.
  5. Above the <script> reference to blazor.webassembly.js file add the following script reference.
<script src="_content/Blazor.Fluxor/index.js"></script>

Next, add the DI part of Fluxor:

  1. In the Client project find the Startup.cs file.
  2. Add using Blazor.Fluxor;
  3. Change the ConfigureServices method to add Fluxor:

public void ConfigureServices(IServiceCollection services)
{
    services.AddFluxor(options => options.UseDependencyInjection(typeof(Program).Assembly));
}

(Taken from: https://github.com/mrpmorris/blazor-fluxor/tree/master/samples/01-CounterSample)

Now we’re ready to get going with Fluxor. Let’s add a Store folder in the Client project and create some classes. Fluxor needs quite some boilerplate code. I’m not sure how I feel about that, but for now let’s put our doubts aside and get this integration integrating…

We need the following classes (see the GitHub repo for their implementations; a simplified sketch of the first two follows the list):

  • PaymentState
    • Responsible for holding the state of the payment
  • PaymentFeature
    • Fluxor concept. This uses the state and sets the initial state
  • InitiatePaymentAction
    • Description of the action that’s being fired
  • InitiatePaymentActionReducer
    • Respond to the InitiatePaymentAction and update the state accordingly
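
To give a rough idea of their shape, here is a simplified sketch of the first two (the property and constructor names are based on how the page below uses them; see the repo for the real code):

public class PaymentState
{
    public bool IsLoading { get; }
    public string ErrorMessage { get; }
    public string Token { get; }

    public PaymentState(bool isLoading, string errorMessage, string token)
    {
        IsLoading = isLoading;
        ErrorMessage = errorMessage;
        Token = token;
    }
}

// The action carries no data; it just signals that a payment should start
public class InitiatePaymentAction { }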

Next up we need a button to initiate the payment. Create a new page called StartPayment.razor in the Pages folder. (I’m using the out-of-the-box template from Visual Studio.) This page is responsible for starting the payment. The implementation looks like this:

@page "/startpayment"
@inherits Blazor.Fluxor.Components.FluxorComponent
@inject IJSRuntime JSRuntime
@using Blazor.Fluxor
@using StripeIntegration.Client.Store
@inject IDispatcher Dispatcher
@inject IState<PaymentState> PaymentState

<h3>StartPayment</h3>

@if (PaymentState.Value.IsLoading)
{
    <p>Loading...</p>
}

@if (PaymentState.Value.ErrorMessage != null)
{
    <p>Errors: @PaymentState.Value.ErrorMessage</p>
}

@if (PaymentState.Value.Token != null)
{
    <p>Token: @PaymentState.Value.Token</p>
}

<button @onclick="StartPaymentClick">Start payment!</button>

@code {
    private void StartPaymentClick()
    {
        Dispatcher.Dispatch(new InitiatePaymentAction());
    }

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {        
        if (!firstRender)
        {            
            if (PaymentState.Value.Token != null)
            {
                await JSRuntime.InvokeAsync<string>("stripeCheckout", null, 1, PaymentState.Value.Token);
            }
        }
    }
}

Let’s explain this page:

The @page, @inherits, @using and @inject directives are nothing new. Note the Fluxor-specific usings and injections.
Next, we’re checking the state values. Nothing too fancy. If the user clicks the Start payment! button, Fluxor kicks in and fires the InitiatePaymentAction. Blazor automagically updates the state and re-renders itself. The result is that the Loading... text appears as we’re starting the payment.

On a side note: don’t forget to update the usings in _Imports.razor. You should add your newly created classes:

@using StripeIntegration.Client
@using StripeIntegration.Client.Shared
@using StripeIntegration.Client.Store

Server

Now, let’s switch to the server. The client is more or less done, although there is one thing we still have to fix. Let’s do that after we’ve shizzled the server side.

When the user clicks the button to start the payment (think: shopping cart), a request is fired to our own .NET Core server. Let’s make that a call to /startpayment. This endpoint is responsible for starting the payment flow by creating a payment session with Stripe. So, let’s create this controller and name it StartPaymentController:

namespace StripeIntegration.Server.Controllers
{
    [ApiController]
    [Route("[controller]")]    
    public class StartPaymentController : ControllerBase
    {
        [HttpGet]
        public async Task<IActionResult> Get()
        {
            StripeConfiguration.ApiKey = "YourSecretStripeApiKey"; //Get it from your stripe dashboard

            var options = new SessionCreateOptions
            {
                PaymentMethodTypes = new List<string>
                {
                    "card",
                    "ideal"
                },                
                LineItems = new List<SessionLineItemOptions>
                {
                    new SessionLineItemOptions
                    {
                        Name = $"Pants with 3 legs",
                        Description = $"Pants for those who have 3 legs",
                        Amount = 100, // 1 euro
                        Currency = "eur",
                        Quantity = 1
                    }
                },
                SuccessUrl = "https://localhost:5001/success?session_id={CHECKOUT_SESSION_ID}",
                CancelUrl = "https://localhost:5001/failed"
            };

            var service = new SessionService();
            Session session = await service.CreateAsync(options);
            return Ok(session.Id);
        }
    }
}

This controller starts the payment session with Stripe and returns a session id to the frontend. That’s the Token from the PaymentState class on the /startpayment page!
It’s returned to this page and from there on we use this token to open the Stripe checkout page. So, let’s switch back to the client, because we need 1 more thing over there.

Client (again)

Communicating with Stripe requires some JavaScript magic. Since we are using Blazor on the client, we need to interop with this JavaScript.
Stripe needs 3 pieces of JavaScript:

  1. The stripe.js file. *Only* load it directly from js.stripe.com!
  2. A stripe object with a publishable key (get it from your Stripe dashboard)
  3. A method that uses our Token from the server to start the Stripe checkout session.

So, first things first: add the stripe.js JavaScript file to the wwwroot/index.html page. Put it in the <head> section:

<script src="https://js.stripe.com/v3/"></script>

Next, also in the <head> section we need to create a stripe object using the publishable key from stripe:

<script>
        var stripe = window.Stripe('pk_test_publishablekeyfromstripe');

        window.stripeCheckout = function (callBackInstance, amount, token) {
            stripe.redirectToCheckout({
                sessionId: token
            }).then(function (result) {
               // up to you
            })           
        };
</script>

As you can see, this stripeCheckout method needs a token as input to start the checkout session. Let’s get this sorted! Well, actually we’ve already fixed this. Check out the /startpayment page. There is a method called protected override async Task OnAfterRenderAsync(bool firstRender). This method calls the stripeCheckout method once the token is handed to it (via the Blazor update mechanism). This call to stripeCheckout happens via the JSRuntime helper method to interop with the ‘normal’ JavaScript from Stripe.

But before this all works we need some more boilerplate on the client. Add the following classes to the Store folder and check out their implementation in the GitHub repo (a rough sketch of the effect’s core follows the list):

  • StartPaymentEffect
    • Responsible for calling your own endpoint via a HttpClient
  • StartPaymentSuccessAction
    • Action to dispatch when the call to our server succeeds
  • StartPaymentSuccessActionReducer
    • Responsible for setting the state correctly when the call to our server succeeds. So, this class sets the token in the PaymentState
  • StartPaymentFailedAction
    • Action to dispatch when the call to our server fails
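
At its core, the StartPaymentEffect does something like this (a sketch; httpClient and dispatcher are assumed to be provided by Fluxor/DI, and the action constructors shown here are assumptions):

// Call our own endpoint to create the Stripe session, then dispatch the result
try
{
    var sessionId = await httpClient.GetStringAsync("/startpayment");
    dispatcher.Dispatch(new StartPaymentSuccessAction(sessionId));
}
catch (Exception e)
{
    dispatcher.Dispatch(new StartPaymentFailedAction(e.Message));
}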

Also, don’t forget to add a success and a failed page to your application. Whenever a customer cancels the checkout, Stripe will redirect them to the failed URL, which you configure in the API on your own server. The same is true for the success page: whenever the checkout succeeds, Stripe redirects the customer to your success page.
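
These pages can be as minimal as this (a sketch; each @page directive goes in its own .razor file):

@page "/success"
<h3>Payment succeeded!</h3>

@page "/failed"
<h3>Payment failed or was cancelled.</h3>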

And that’s it! You’ve got yourself a working integration with Stripe Checkout and Blazor WebAssembly.

Please check out the GitHub repo and try it yourself. Don’t forget to replace all the tokens!

Used tools and versions

  • Visual Studio Professional 2019 Preview – Version 16.5.0 Preview 2.0
  • Blazor template: dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.2.0-preview1.20073.1
  • .NET Core 3.1.200-preview-014883
  • Blazor.Fluxor – v1.4.0
  • Stripe.net – v34.20.0

NuGet Package for DD4T and Experience Manager

Today I released a NuGet package for Experience Manager and DD4T (.NET). It allows a developer to easily add the required markup to their (View)Models to enable the inline editing features of the Experience Manager from SDL Tridion. Only use this package if you use the DD4T .NET framework!

Install the package using the package explorer:

     Install-Package DD4T.XPM

The installer automatically adds 2 files to the root of your MVC WebApplication: SiteEdit_config.xml and RegionConfiguration.xml.
It also updates the web.config in the ‘Views’ folder to use the DD4T.XPM.XpmWebViewPage as pageBaseType and includes the DD4T.XPM.HtmlHelpers namespace. After installing the package it’s recommended to restart Visual Studio.

How to use

1) Decorate your Models with the XPM Attributes:


[InlineEditable]
public class ArticleModel
{
    [InlineEditableField(FieldName = "title")]
    public string Title { get; set; }

    [InlineEditableField(FieldName = "complink_to_article")]
    public HyperLink LinkToFullArticle { get; set; }

    [InlineEditableField(FieldName = "related_articles")]
    public List<HyperLink> RelatedArticles { get; set; }

    [InlineEditableField(FieldName = "publish_date")]
    public DateTime PublishDate { get; set; }

    [InlineEditableField(FieldName = "prio")]
    public int Priority { get; set; }
}

[InlineEditable]
public class HyperLink
{
    [InlineEditableField(FieldName = "link_to")]
    public string Url { get; set; }

    [InlineEditableField(FieldName = "link_title")]
    public string LinkTitle { get; set; }
}


2) Create your model and call the ‘MakeInlineEditable’ method


var model = new ArticleModel();
new XpmActions().MakeInlineEditable<ArticleModel>(model, tridionComponentPresentation);
return model;


3) Use the DD4T.XPM helpers to write out the value in the View


@*Start of Component Presentation in View*@
@XPM.StartInlineEditingZone()

@* or for a 'submodel' (Component Presentation in a Component Presentation) *@
@XPM.StartInlineEditingZone(Model.Teaser)

@*Write out MarkUp and value*@
<h2>@XPM.Editable(m => m.Title)</h2>

@*Write out MarkUp and value separately*@
<h2>@XPM.MarkUp(m => m.Title) @Model.Title</h2>

@*Lists*@
@for (int i = 0; i < Model.RelatedArticles.Count; i++)
{
    <li>@XPM.Editable(m => m.RelatedArticles[i].LinkTitle)</li>
}

@*Region*@
<div>
    @XPM.RegionMarkup("PageHeader")
</div>


That’s all.

Regions

Regions are configured in the file ‘RegionConfiguration.xml’ in the root of your web application. This file is added by the NuGet installer. In your view you can use the following call to write out the region markup:

@XPM.RegionMarkup("PageHeader")

PageHeader is the ID of the region as configured in the RegionConfiguration.

Final notes

The NuGet installer adds the ‘SiteEdit_config.xml’ file to the root of your project. If this file is present, the XPM helper methods will write out the markup (if you called ‘MakeInlineEditable’). If this file is not present, the helpers don’t output the markup, just the value. Of course you want to control the call to ‘MakeInlineEditable’ based on the environment you’re in: only call it in staging!
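
For example, you could gate the call on an environment setting (a sketch; the 'Environment' appSetting is hypothetical):

// Only make models inline-editable on the staging environment
if (ConfigurationManager.AppSettings["Environment"] == "Staging")
{
    new XpmActions().MakeInlineEditable<ArticleModel>(model, tridionComponentPresentation);
}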

This package is developed with .NET 4.5.1 and NuGet version 2.7. I did *not* test it with other .NET frameworks, but I assume it just works.

Happy coding and let me know if you run into issues

Custom Resolvers and Configuration

While working on a Custom Resolver, I needed to grab some configuration values. This seems fairly straightforward, and the documentation from SDL Tridion covers this. It states that we have to add a ConfigurationSection to the ‘Tridion.ContentManager.config’ file and that we can read these values using the following code:

private string SCHEMA_TITLES = 
Config.GetConfig("My.Tridion.CustomResolving", "schemaTitles");

It’s unclear where ‘Config.GetConfig’ comes from, but there’s more. The SDL Tridion Content Manager uses different services to resolve items to be published. The following SDL Tridion services use your Custom Resolver:

– TcmServiceHost
– TcmPublisher

The TcmServiceHost calls the resolver when a user clicks on the ‘Show Items to publish’ button in the ‘Publish’ popup.
The TcmPublisher calls the resolver when the item is actually published.
Both services have their own executable and their own configuration: TcmServiceHost.exe.config and TcmPublisher.exe.config (Located in the %Tridion_Home%\bin directory)

So, after adding the configuration for our custom resolver to the Tridion.ContentManager.config file I hooked up the debugger to the TcmPublisher and clicked ‘Publish’: no configuration values were found. Which makes perfect sense, since the TcmPublisher.exe uses the TcmPublisher.exe.config as its configuration source. The same is true for the TcmServiceHost: it uses the TcmServiceHost.exe.config as its configuration source.

How to solve this configuration issue?

Well, luckily both config files have a reference to the ‘Tridion.ContentManager.config’ file: (All Tridion Content Manager executables/services have a reference to this config file)

 <tridionConfigSections>
    <sections>
      <clear />
      <add filePath="D:\Program Files (x86)\Tridion\config\Tridion.ContentManager.config" />
      <add name="loggingConfiguration" />
    </sections>
  </tridionConfigSections>

So now, in your Custom Resolver it’s nothing more than loading the Tridion.ContentManager.config file to get your custom resolver configuration value(s):


// Find the reference to Tridion.ContentManager.config in the current exe's config
Tridion.Configuration.ConfigurationSections tcmConfigSections =
    (Tridion.Configuration.ConfigurationSections)ConfigurationManager.GetSection(Tridion.Configuration.ConfigurationSections.SectionName);

var tcmSectionElem = tcmConfigSections.Sections
    .Cast<Tridion.Configuration.SectionElement>()
    .FirstOrDefault(s => !string.IsNullOrEmpty(s.FilePath) &&
                         s.FilePath.EndsWith("tridion.contentmanager.config", StringComparison.InvariantCultureIgnoreCase));

if (tcmSectionElem != null)
{
    var tcmConfigFilePath = tcmSectionElem.FilePath;

    // Load Tridion.ContentManager.config
    ExeConfigurationFileMap map = new ExeConfigurationFileMap { ExeConfigFilename = tcmConfigFilePath };
    var config = ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);
    var myCustomResolverSettings = ((AppSettingsSection)config.GetSection("My.Tridion.CustomResolving")).Settings;
    var schemaTitles = myCustomResolverSettings["schemaTitles"].Value;
}


The configuration in the Tridion.ContentManager.config is as follows (shortened):

<section name="My.Tridion.CustomResolving" type="System.Configuration.AppSettingsSection" />
...
<My.Tridion.CustomResolving>
	<add key="schemaTitles" value="FullArticleSchema" />
</My.Tridion.CustomResolving>

The type of the ConfigurationSection is ‘AppSettingsSection’. This is different from the documentation, but that doesn’t matter.
You can insert whatever section type you like, as long as you update the code that reads the ConfigurationSection (and cast it to the correct type).

Have fun!

How to detect if you are in the Editor view in XPM (serverside)

While working on an Experience Manager implementation, you often find yourself (at least I do) in the position where you want to change/update the generated HTML to be able to edit the content nicely in Experience Manager.
XPM uses HTML comments to ‘mark’ fields as being editable. The XPM JavaScript draws a border around such a field to highlight it, so the editor knows the field can be edited.
The JavaScript from XPM uses the nearest HTML container (<div>, <h1>, etc.) to draw this border.

But often enough the HTML doesn’t fit XPM. The border is drawn too big, too small or doesn’t show up at all because there is no ‘fitting’ HTML element. Or your property doesn’t have a visual representation at all. Think for instance of video parameters, or metadata. Ideally you want your editors to be able to edit these properties in the Experience Manager view too.

The ideal scenario is that the developer can somehow detect if the current page/DCP is rendered in the XPM editor view (and NOT just in the ‘normal’ staging website). Knowing that the current page is rendered in the XPM editor view allows the developer to add additional HTML to create a more customized user experience for the editor.

There are several solutions to detect the state (in XPM editor view) clientside (using JavaScript), but I haven’t seen a solution for detecting it serverside. And because I needed it in my current project and saw that several people asked for it, I decided to try to build a solution and ‘put it out there’.

This is how it works in a nutshell

  1. XPM loads the staging website in an <iframe>
  2. A GUI extension (1 JavaScript file with very few lines of code) is loaded
  3. This JavaScript contains a method that runs just before the page is loaded into the iframe
  4. It updates the <iframe> ‘src’ attribute and adds a querystring parameter called ‘ActiveInXpm=true’
  5. Another line of code takes care of the ‘Exit’ button functionality (makes sure that you are redirected back to the original URL with a querystring parameter ‘ExitXpmEditor=true’)

I know it’s a *very* simple extension, but for me it does the job. In my (DD4T .NET) website I check this querystring parameter, and that’s how I know that the current page is loaded in the XPM editor view. (Getting a querystring parameter is a trivial task in every serverside language.)

While this works for the first page that is opened in the XPM editor, it doesn’t work when the editor navigates to another URL while staying in the Experience Manager. This all happens within the <iframe> and I couldn’t find a reliable way to add the querystring parameter to each request. If you do, please let me know 🙂

I solved this by setting the ‘ActiveInXpm’ querystring parameter value in the session. My code that checks if I am in the editor view then checks the session. If the user exits the XPM editor by clicking the ‘Exit’ button, the session is emptied and the page looks exactly how it would look on the live site. The session handling functionality is only added to the staging website, so the live website *never* has to deal with XPM stuff. A sketch of this check follows below.
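
A minimal sketch of such a server-side check (classic ASP.NET; the helper name is mine, and the real session handling code is available on request):

public static bool IsActiveInXpm(HttpContext context)
{
    // The 'Exit' button redirects back with ExitXpmEditor=true: clear the flag
    if (context.Request.QueryString["ExitXpmEditor"] == "true")
        context.Session["ActiveInXpm"] = null;
    // The GUI extension adds ActiveInXpm=true when XPM loads the page
    else if (context.Request.QueryString["ActiveInXpm"] == "true")
        context.Session["ActiveInXpm"] = true;

    return context.Session["ActiveInXpm"] != null;
}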

I built this for a DD4T .NET site. If you want the code for handling the session, drop me an email and I’ll send it to you.

I’ve uploaded the extension to GitHub.

Let me know what you think or if you have any problems with it.


Domain Driven Development with DD4T and XPM

What a title that is 🙂

DD4T stands for Dynamic Delivery For Tridion and is a lightweight ASP.NET MVC framework built on top of the SDL Tridion stack. It’s open source and you can find more about it here

XPM is the WYSIWYG editor (and much more!) that ships with SDL Tridion.

Domain Driven Development is… well, Wikipedia explains it better than I can, so check it out!

I am a big fan of the MVC framework from Microsoft. No wonder I also love the DD4T framework, as it makes building MVC websites with SDL Tridion a LOT easier.
One of the shining features of SDL Tridion is its recently upgraded WYSIWYG editor (or Experience Manager) that allows editors to edit the content of the website in the context of the website itself, in the browser.
This is a great feature and makes it very easy to adjust content in a natural way.

Of course, before content editors can use the Experience Manager (XPM from now on) the SDL Tridion consultant has to pull some triggers to make this possible. All relatively easy to do.

But with DD4T it’s not as straightforward as one would want. And especially if you are doing (a form of) Domain Driven Development, and thus are using (domain) ViewModels.

Before you read on, I highly recommend that you read Kah Tang’s article on ViewModels in DD4T first. This is how I usually implement DD4T and helps you understand the problem we are trying to solve in this post.

I’ve seen a few DD4T implementations, and most of them use the OOTB DD4T models as their ViewModels. (Don’t know what ViewModels are? See: http://stackoverflow.com/a/11064362/1221887)

For example, consider the following razor View (which also renders the XPM markup):


@model DD4T.ContentModel.IComponent
@foreach (IFieldSet fieldset in Model.Fields["RelatedLinks"].EmbeddedValues)
{
    <div class="cd-list">
        @if (fieldset.ContainsKey("Link") || fieldset.ContainsKey("ExternalLink"))
        {
            <a class="cd-link-ext" title="" href="@fieldset["ExternalLink"].Value" target="_blank">
                @Html.SiteEditField(Model, fieldset["Image"])
                @fieldset["Image"].GetImage("img")
            </a>
        }
    </div>
}


As you can see this View uses the DD4T ‘IComponent’ as its ViewModel, and it is using the OOTB DD4T ‘SiteEditField’ HtmlHelper to generate the XPM markup.
While using the IComponent as a ViewModel is valid, it’s not as nice and clean as it could be. The developer has to know the name of the field in Tridion and there’s no compile-time checking. It also doesn’t leverage all the advantages of the MVC framework (for example: strongly typed views).

I always use (domain) specific ViewModels. Using your own, domain-specific ViewModels has many advantages, of which ‘separation of concerns’ and intellisense are just two (IMHO).
A ViewModel’s only purpose is to display the Model in a certain way. Sometimes they are (almost) identical to your Domain Model, but a ViewModel can/must have extra properties to make displaying it possible. It results in much clearer views.

Consider the following example (not rendering the XPM markup):


@model ArticleViewModel
@foreach (var relatedLink in Model.RelatedLinks)
{
    <div class="cd-list">
        <a class="cd-link-ext" title="@relatedLink.Title" href="@relatedLink.Url" target="_blank">
            @Html.RenderImage(relatedLink)
        </a>
    </div>
}


As you can see it’s much cleaner and easier to read than the previous View, and there is also no logic (checking, etc.) involved. (I also could have used an HtmlHelper to write out the tag. It’s up to you.)

But what if your customer asks you to implement XPM? You cannot use the OOTB Html helper, since your ViewModel doesn’t have the properties this helper expects.
Well, there are a few options.

1. Add the ‘IComponent’ as a complex property to your ViewModel.
This would look something like this:


@model ArticleViewModel
@{
    int counter = 0;
}
@foreach (var relatedLink in Model.RelatedLinks)
{
    <div class="cd-list">
        @Html.SiteEditField(Model.TridionComponent.Fields["hyperlinks"][counter])
        <a href="@relatedLink.Url" title="@relatedLink.Title" class="cd-link-ext" target="_blank">
            @Html.SiteEditField(Model.TridionComponent.Fields["Image"][0])
            @Html.RenderImage(relatedLink)
        </a>
    </div>
    counter++;
}


It still requires the developer to know the names in Tridion and it’s lacking the strongly typed advantages (intellisense).

2. Add the XPM markup as separate properties to your ViewModel:


@model ArticleViewModel
@foreach (var relatedLink in Model.RelatedLinks)
{
    <div class="cd-list">
        @relatedLink.XPMMarkUp
        <a href="@relatedLink.Url" title="@relatedLink.Title" class="cd-link-ext" target="_blank">
            @relatedLink.Image.XPMMarkUp
            @Html.RenderImage(relatedLink)
        </a>
    </div>
}


This already looks cleaner and has intellisense. But I don’t like the added properties to the ViewModel. It clutters the ViewModel.

I struggled with this issue for quite some time. After trying the above-mentioned approaches I wasn’t happy with the result. Although it works, it isn’t as nice, clean and intuitive (for a programmer) as it could be.

After spending much time on it, I came up with my own approach. It involved quite some coding, but it’s for a good cause, right? And I liked doing it, because I learned a lot of new stuff.

I wanted it to be a generic solution, so everyone using DD4T could use it. This is how it looks (it’s still a work in progress!):

1. Create your ViewModel and decorate it with attributes.
Example:


[InlineEditable]
public class Article : IArticle
{
    [InlineEditableField(FieldName = "title")]
    public string Title { get; set; }

    [InlineEditableField(FieldName = "short_intro")]
    public string Summary { get; set; }

    private IList<string> _relatedLinks = new List<string>();

    [InlineEditableField(FieldName = "related_links")]
    public IList<string> RelatedLinks
    {
        get { return _relatedLinks; }
        set { _relatedLinks = value; }
    }
}


As you can see there are 2 new Attributes involved.

  • InlineEditable
  • InlineEditableField

The first attribute ‘InlineEditable’ marks the class (ViewModel) as inline-editable with XPM. The second attribute marks a single field as inline-editable with XPM.

Of course this is not everything. Once you’ve created your ViewModel, you have to make a call to a method to do the magic. Since all my ViewModels are by default created inside a ModelFactory (a subject for a different post), I made this functionality part of a base class, but you can implement it any way you like.
This is how my (simplified) ViewModel builder looks:


public class ArticleBuilder : BuilderBase
{
    private ComponentPresentation TridionComponentPresentation { get; set; }

    public ArticleBuilder(ComponentPresentation componentPresentation)
    {
        TridionComponentPresentation = componentPresentation;
    }

    public Article Build()
    {
        var tridionComponent = TridionComponentPresentation.Component;
        var articleViewModel = new Article
        {
            Title = tridionComponent.Fields["title"].Value,
            Summary = ResolveRichText(tridionComponent.Fields["short_intro"].Value),
            RelatedLinks = tridionComponent.Fields["related_links"].Values.ToList()
            // Etc.
        };

        // Work magic for XPM
        MakeInlineEditable(articleViewModel);
        return articleViewModel;
    }
}


That’s all. The article is now ready for XPM. It’s not yet inline editable, but the information from Tridion is added to the class, so we can use it in our View.

Let’s see how our View would look if we made this ViewModel (inline) editable with XPM:


@model ArticleViewModel
<div class="middleContent">
    @XPM.StartInlineEditingZone()
    <h2>@XPM.Editable(m => m.Title)</h2>
    <div class="intro">
        @XPM.Editable(m => m.Summary)
    </div>
    <ul>
        @foreach (var relatedLink in Model.RelatedLinks)
        {
            <li>
                @XPM.Editable(m => m.RelatedLinks, relatedLink)
            </li>
        }
    </ul>
</div>


As you can see we have full intellisense and a nice and clean View. Of course, I simplified the example a little, but it proves a point.

The XPM helper and its ‘Editable’ method write out the actual value of the property and its corresponding XPM markup.
There’s also a ‘MarkUp’ method that just writes out the XPM markup. This becomes handy when you want to make an image or hyperlink inline-editable:


<div>
    @XPM.MarkUp(m => m.ImageUrl)
    <img src="@Model.ImageUrl" />
</div>


All this is not yet part of DD4T, but I am planning on integrating it into the framework, as I see it as a valuable addition. (If not: let me know.)
It encourages the use of (domain) ViewModels and results in a cleaner solution.

If you want to use the XPM helper in your project now, drop me an email and I will send you the source code and the instructions on how to set it up. In the end there’s really not much to it, but isn’t that the case with all challenges?

Troubleshooting the SDL Tridion Experience Manager with Session Preview

In the past week I had the opportunity to install the Experience Manager with Session Preview on a completely DD4T and SDL Tridion driven website. Configuring the Experience Manager can be quite painful, especially if you don’t know how Session Preview (exactly) works and have no clue where to start and where to look.

In this post I want to give you some hooks and pointers on where to look if things get interesting 🙂
In fact, if you are DESPERATE about why your Session Preview isn’t working, this post is aimed at you!

But first: thanks to Andrew Marchuk, Daniel and Likhan from SDL Tridion for helping me. Without their help I would still be staring at my screen 🙂

Well, let’s start!

(I’ll assume you have a basic understanding of SDL Tridion).

First, read this answer and the comments: http://stackoverflow.com/questions/10788508/continously-update-preview-alert-on-sdl-tridion-ui-2012/10802033#10802033

Meditate on it, let it sink in, adjust your setup and try again.

Now, if it still doesn’t work, read on:

First, turn off caching for your website. Just to be sure. After you get Session Preview working, turn caching back on and see what happens. But for troubleshooting Session Preview I recommend turning off caching completely. Just to be sure…

1. Do a basic sanity check and check the following for your staging website:

– Open the cd_storage_conf.xml from your staging website and ensure that:

  • The <Wrapper> element exists! Like this:

    <Wrappers>
      <Wrapper Name="SessionWrapper">
        <Timeout>120000</Timeout>
        <Storage Type="persistence" Id="db-session-webservice" dialect="MSSQL"
                 Class="com.tridion.storage.persistence.JPADAOFactory">
          <Pool Type="jdbc" Size="5" MonitorInterval="60"
                IdleTimeout="120" CheckoutTimeout="120" />
          <DataSource Class="com.microsoft.sqlserver.jdbc.SQLServerDataSource">
            <Property Name="serverName" Value="WIN-1CJUK3HE34H" />
            <Property Name="portNumber" Value="1433" />
            <Property Name="databaseName" Value="Tridion_SessionPreview" />
            <Property Name="user" Value="TridionBrokerUser" />
            <Property Name="password" Value="PassWord" />
          </DataSource>
        </Storage>
      </Wrapper>
    </Wrappers>

    Of course it should point to your SESSION PREVIEW database! Not to your default, ordinary Broker database.

  • Check if at least the following StorageBinding is present in the cd_storage_conf.xml of your website:

    <StorageBindings>
      <Bundle src="preview_dao_bundle.xml"/>
    </StorageBindings>
    
  • Check if you added the AmbientData HttpModule (for .NET sites! For Java it’s probably a filter) in the web.config of your website:

    <add type="Tridion.ContentDelivery.AmbientData.HttpModule" name="AmbientFrameworkModule" preCondition="managedHandler" />
    
  • If you are on a website that is NOT COMPLETELY DYNAMIC (so a website that’s NOT on DD4T), check if you added the following module in the web.config of your staging website:

    <add name="PreviewContentModule" type="Tridion.ContentDelivery.Preview.Web.PreviewContentModule" />
    

    Again: this module is NOT, I repeat NOT, necessary if your website is completely dynamic (e.g. retrieves everything from the broker, like DD4T). If you still use this module, you will see that clicking ‘Update Preview’ generates files on the filesystem! And it will not show you the updated preview!

  • Open the cd_ambient_conf.xml file of your Staging website and check if the following Cartridge is referenced:
    <Cartridge File="cd_webservice_preview_cartridge.xml"/>
    

2. Check the following for your OData Webservice: (The one that is used by the Session Preview, so the one you configured as the ‘Content Delivery Endpoint Url’ on your Publication Target)

  • Copy/paste this ‘Content Delivery Endpoint Url’ and paste it into your browser. (Of course inside the company domain…) and see if it responds.
  • The url looks like this: http://localhost:73/odata.svc/
    You should get a response with a listing of all collections that can be retrieved by this OData endpoint. Something along the lines of this:

    
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <service xml:base="http://localhost:73/odata.svc/" xmlns="http://www.w3.org/2007/app" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:edmx="http://schemas.microsoft.com/ado/2007/06/edmx" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
        <workspace>
            <atom:title>Default</atom:title>
            <collection href="Binaries">
                <atom:title>Binaries</atom:title>
            </collection>
            <collection href="BinaryVariants">
                <atom:title>BinaryVariants</atom:title>
            </collection>
      	....
    	....
        </workspace>
    </service>
    
  • Open the cd_storage_conf.xml of your OData webservice and ensure that:
    • The Wrapper tag is there and is pointing to your SESSION PREVIEW database. So not to your regular Broker Database!
    • The StorageBinding with the preview dao bundle is there. Like this:
      <StorageBindings>			
                      <Bundle src="preview_dao_bundle.xml"/>				          
                  </StorageBindings>   
      
  • Open the cd_ambient_conf of your OData webservice and verify that:
    • The preview Cartridge is there. Like this:
      <Cartridge File="cd_webservice_preview_cartridge.xml"/>
      

Now, do an IISReset on your website and for your OData webservice. This makes sure that your changes to the various configuration files are used when the Content Delivery Instance boots up the first time. DO NOT SKIP THIS STEP! In case you are still in doubt: DO NOT SKIP THIS STEP! (Sorry for shouting)

Now, hit ‘Update Preview’ again. If it is still not working for whatever reason keep reading:

1. Open the logback.xml file of your Staging website, and set the loglevel to ‘VERBOSE’.
2. Open the logback.xml file of your OData webservice and set the loglevel to ‘VERBOSE’.
3. Clear both logfiles! (So you have a fresh start)
4. Clear the ‘Tridion’, ‘Tridion Content Manager’ and ‘Application’ Windows Eventlogs on the Content Manager Server
5. Clear the ‘Application’ Windows Eventlog on the Staging WebSite server
6. Clear the ‘Application’ Windows Eventlog on the Odata webservice server

Do an IISReset (You edited the logback.xml file, so this is necessary!)

Now, hit ‘Update Preview’ again and check out the logfiles in this order:

  • cd_core.log of your Staging website
    -> Anything unusual? Especially errors and warnings with regard to the Ambient Data Framework are important! Take them seriously and double-check the cd_ambient_conf.xml and the cd_storage_conf.xml of your staging website. Also, check if all HttpModules and/or filters are present in the web.config of your website! (See above)
  • cd_core.log of your OData webservice. If this file is (almost) empty, that means the ‘Update Preview’ request NEVER reached the OData webservice. This could be due to:
    – Network issues: are the IIS Bindings of the OData webservice correct?
    – Can you connect to the OData webservice using your browser?
    – Is your publication target pointing to the correct Content Delivery Endpoint Url (your OData webservice)?
  • If there is data in the cd_core.log of your OData webservice, check to see if there are errors or unusual statements.

    • If you search for your adjusted content, do you see it? If so, this means that your changed content was correctly sent to the OData webservice. If not, that means your staging website cannot connect to the OData webservice. Again: check IIS settings and network settings.
  • Open the Session Preview database using SQL Server Management Studio, and open the table ‘Component Presentations’. After you hit ‘Update Preview’, you SHOULD see something added to this table. If not: check if you referenced the correct Session Preview broker database in BOTH of your Wrappers (in the cd_storage_conf.xml of your website and in the cd_storage_conf.xml of your OData webservice!).

If you see an HTTP error 400 BAD REQUEST after you click on ‘Update Preview’ check the following:

  • Open the Windows EventLog ‘Tridion Content Manager’ on the Content Manager server and check if you see the same error here.

If so, try the following:

Stop the TcmServiceHost windows service on the Content Manager server. (Be careful: the SDL Tridion Content Manager stops working now!)
Next, browse to the SDL Tridion install directory\bin with the command prompt and start TcmServiceHost.exe with the -debug switch. Like this:

TcmServiceHost.exe -debug

Now, open Fiddler on the Content Manager server, apply a filter to show only traffic from the TcmServiceHost and hit ‘Update Preview’ again. Now you have the request and you can inspect it to see if there’s anything unusual. E.g. the Content-Length is 0. That’s weird, because it means no data was sent!

The last resort consists of tracing everything related to the OData webservice. If everything above failed, do the following:

Open the Web.config of the OData webservice and add the following code:

<system.diagnostics>
    <trace autoflush="true" />
    <sources>
      <source name="System.ServiceModel" switchValue="All">
        <listeners>
          <add name="TraceListeners" type="System.Diagnostics.XmlWriterTraceListener" initializeData="C:\Temp\trace.svclog" />
        </listeners>
      </source>
    </sources>
  </system.diagnostics>

Adjust the ‘initializeData’ path to a path of your choosing.

Now, hit ‘Update Preview’ again, and after it’s finished, open the trace by double-clicking on it. (If you don’t have the tracetool, download it here)

Find the first red colored entry and inspect the error message. In my case the ‘maxReceivedMessageSize’ of the OData webservice was too small.

You can adjust this setting in the Web.config of the OData webservice. Here is an example of the updated part of the Web.config:

<system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="True" />
    <behaviors>
      <endpointBehaviors>
        <behavior name="webHttp" >
          <webHttp/>
        </behavior>
      </endpointBehaviors>
    </behaviors>
    <services>
	  <!-- HTTP support. In case of HTTPS these services have to be commented. -->	
      <service name="Tridion.ContentDelivery.Webservice.ODataService">
        <endpoint behaviorConfiguration="webHttp" binding="webHttpBinding" bindingConfiguration="AdjustedBindingConfiguration" contract="Tridion.ContentDelivery.Webservice.IODataService" />
      </service>
      <service name="Tridion.ContentDelivery.Webservice.LinkingService">
        <endpoint behaviorConfiguration="webHttp" binding="webHttpBinding" contract="Tridion.ContentDelivery.Webservice.Ilinking" />
      </service>
	  <service name="Tridion.ContentDelivery.Webservice.AccessTokenService">
		<endpoint behaviorConfiguration="webHttp" binding="webHttpBinding" contract="Tridion.ContentDelivery.Webservice.IOAuth2AccessToken" />
	  </service>
    </services>
	<!-- In case of HTTPS support uncomment this block.
	-->
	<bindings>     	  
		<webHttpBinding>      
			<binding name="AdjustedBindingConfiguration" maxReceivedMessageSize="2097152000" maxBufferSize="2097152000">
				<readerQuotas maxArrayLength="81920" maxBytesPerRead="5120" maxDepth="32" maxNameTableCharCount="81920" maxStringContentLength="2097152" />
			</binding>
	    </webHttpBinding>
    </bindings>
	
	
  </system.serviceModel>

Note that ‘maxReceivedMessageSize’ and ‘maxBufferSize’ should be the same!

Don’t forget to remove these settings once you’ve resolved all your issues!

Phew! I *really* hope your Session Preview service is now working properly.
If this isn’t the case, consider asking on StackOverflow (and while you’re there, consider committing to the SDL Tridion Stack Exchange proposal)
The community is really helpful and very knowledgeable. Of course you can also open a support ticket with Customer Support.

Have fun!

DD4T and caching

In this post I will try to describe the caching options that are available to you to increase the responsiveness and performance of your dynamic website built on top of the Dynamic Delivery for Tridion (DD4T) framework.

In a website built with DD4T, (almost) all content comes from the Broker database. The content is stored as an XML string in the database and is transformed (de-serialized) into .NET objects at request time. As you can imagine this has a huge impact on the performance of your website: on every request the XML is loaded (streamed) from the database and the DD4T framework de-serializes it into usable .NET objects.
This is a time-consuming process and puts a heavy load on your webserver.

Luckily there are a few options/strategies to improve the performance of your website. And the beauty about these options is that they (almost) come for free!

Output Caching

The first option is the out-of-the-box Output caching from ASP.NET.
Just decorate your controller (PageController, ComponentController; your choice) with the OutputCache attribute and you’re done!

[OutputCache(CacheProfile = "ControllerCache")]
public override System.Web.Mvc.ActionResult Page(string pageId)
{
...
}

And in your web.config you configure the duration of your ControllerCache:

<caching>
  <outputCacheSettings>
    <outputCacheProfiles>
      <add name="ControllerCache" duration="3600" varyByParam="*"/>
    </outputCacheProfiles>
  </outputCacheSettings>
</caching>

OutputCache caches the output (…) for the duration you configured in the web.config. If the cache duration is set to 5 minutes, and within those 5 minutes you publish a page, the changes are NOT reflected in your browser when you hit F5. Only after 5 minutes is the cache invalidated, and on the next request the XML is loaded from the Broker database again. And de-serialized.

DD4T Caching

Luckily for us, DD4T ships with a built-in cache mechanism. This caching mechanism is built on top of the .NET System Runtime cache and can be used in conjunction with the Output Cache.

DD4T caching stores de-serialized objects like Components and Pages in the .NET Runtime cache after they are requested for the first time. Every consecutive request for that page/component loads it from the object cache instead of loading it from the Broker database and de-serializing it into .NET objects.
As you can imagine, this causes a massive performance improvement.

But how does DD4T ‘know’ when to invalidate an item in the cache? Because if you re-publish a page or component, you want your website to show the updated page/component.
The fact is that DD4T never knows when an item is republished, unless it ‘asks’ SDL Tridion for it (due to the fact that a website is stateless).
Well, this ‘asking’ is implemented in DD4T.

DD4T polls every x seconds/minutes/hours/etc. (configurable) to check if the LastPublishDate of an item in the cache has changed. If it has changed (the item was republished) it will invalidate that item. The next time the item is requested, it will be loaded from the Broker database, de-serialized and stored in the cache.

To configure how often DD4T needs to check the LastPublishDate of the items in the cache, use this setting in your web.config (the value must be in seconds):

 <add key="CacheSettings_CallBackInterval" value="30"/>

In this example, DD4T polls the Broker database every 30 seconds to check if the items in the cache are still valid.

Also, after a configurable amount of time, the item is -no matter what- invalidated. The amount of time can be configured separately for pages and components.
Use the following configuration settings to accomplish this:

<add key="DD4T.CacheSettings.Page" value="3600"/>
<add key="DD4T.CacheSettings.Component" value="3600"/>

In this example all the pages and components in the cache are invalidated after 1 hour.

SDL Tridion Object cache

SDL Tridion comes with a caching solution called ‘Object cache’. To quote the documentation, this is what it does:

To improve the performance of your Web site, you can choose to store the most commonly used or resource-intensive objects from your Content Data Store in a cache. The cache keeps these objects available for the applications that request them, rather than reinitializing them each time they are requested.

Pretty obvious right? So no need to explain it further.
Read the documentation here. (Login required)


Final notes

As we have discussed, there are 3 caching options available out of the box. Use them when needed, and tweak them according to your needs.

A very nice post about caching (and the performance you gain) with an SDL Tridion driven website was written by Nuno Linhares. Read it here: Tridion Content Delivery and Caching

I hope I’ve given you some information about caching in a dynamic (DD4T driven) website on top of SDL Tridion to get you started.

Tridion GUI Extensions : How to load a JavaScript without showing a GUI element

A while ago I was struggling with the above-mentioned challenge: I wanted to load some JavaScript into the Tridion Content Manager GUI, but without showing a corresponding GUI element (button, list, etc.).
I searched the online documentation portal, the good old forum and all the Tridion blogs, but could not find it.
With no other option left, I turned to the experts. Since not too long ago, they can also be found here.
(And while you’re there, why not join us?)

It took precisely 3 minutes and I had my answer. Since I could not find it, I assume you also cannot find it. That’s why I share it here.
But not without mentioning the one who gave me the answer: Frank. Thanks.

For adding a JavaScript to extend the Tridion Content Manager GUI, but without showing a button or list or whatever, the following configuration is needed:

1. Add the following configuration to your ‘Editor.config’ (config file to configure your GUI Extension):

<cfg:groups>
  <cfg:group name="MyGroupName">
    <cfg:domainmodel name="MyName">
      <cfg:fileset>
        <cfg:file type="script" id="MyId">/Relative/Path/MyJavaScript.js</cfg:file>
      </cfg:fileset>
      <cfg:services />
    </cfg:domainmodel>
  </cfg:group>
</cfg:groups>

This piece of configuration makes sure that your JavaScript file is loaded for the complete GUI. This might not be what you want. Let’s say you only want to load your methods/classes for a certain view, for example the Component edit screen. You can achieve this by adding the following code to your JavaScript file:

// If you only want your code to affect certain screens/views, you should listen to Anguilla events like this:
$evt.addEventHandler($display, "start", onDisplayStarted);

// This callback is called when any view has finished loading
function onDisplayStarted() {
    $evt.removeEventHandler($display, "start", onDisplayStarted);

    if ($display.getView().getId() == "ComponentView") {
        // Work your magic!
    }
}

It’s easy once you know how 😉