Active Directory, Azure Active Directory and Office 365

Apologies for the absence, I am juggling a Masters degree as well as my SharePoint career and being a good mummy! The pressure is on! And I’ve got another Nuts Challenge on the 1st of March. I now realise why I was so happy to be done with my first degree… the requirement for perfect references has come a long way since 2006!

I have a new client with a requirement to move their current Java-based intranet to SharePoint Online, specifically Office 365. They have Active Directory (on-premises) already in place and do not want to change this, as they have other applications which are dependent on it. They also have Azure Active Directory (AAD) in place for other applications, so my solution will need to include these elements. My first question was why they needed AAD at all, as Office 365 can work perfectly well without it. The answer is that they have several other cloud applications that require authentication through AAD, so it must be used. I needed to do some research into using an on-premises Active Directory with AAD and Office 365, to see if they could mesh well together. Someone told me that the primary AD controller would have to be moved to Azure.

My research suggests that an on-premises AD works very well with AAD (Azure AD) managing credentials for O365. My question to them would still be why they needed AAD in the first place, as an on-premises AD will work perfectly well with O365 on its own. To hazard a guess, it would be because they have several different types of credentials per user, e.g. a Windows Live account as well as the organisational account. I have both of these for Uni: the Windows Live account allows me to log on to OWA and access library services, while the organisational account is for when I’m logged in to the online portal or the Desktop Anywhere application (replicating conditions as if I were on campus).

My understanding of it is as follows:

What Microsoft have done is taken Active Directory, a widely deployed, enterprise-grade identity management solution, and made it operate in the cloud as a multitenant service with Internet scale, high availability, and integrated disaster recovery. You are able to have your AD on premise for all domains. Then you can use Windows Azure Active Directory to give users single sign-on to Software as a Service (SaaS) applications, O365 included. Applications running in the cloud or on-premises can use Windows Azure Active Directory Access Control to let users log in using identities from Facebook, Google, Microsoft, and other identity providers. To support this, Microsoft makes it easy to “connect” Windows Azure Active Directory with an existing directory. At the technical level, organizations can enable identity federation and directory synchronization between an existing Active Directory deployment and Windows Azure Active Directory.

When an organization does this, its Active Directory is, in a sense, stretching over both an on-premises and a cloud deployment. The ability for Active Directory to operate across both on-premises and cloud deployments in a hybrid mode enables an organization to easily take advantage of new cloud-based platforms and SaaS applications, while all of its existing identity management processes and application integration can continue unaffected.

In addition, being able to operate in this hybrid mode is critical for some organizations because of business or regulatory requirements that mandate that certain critical information, such as passwords, be maintained in on-premises servers.

If you don’t have your own on-premises Active Directory and you sign up for Office 365, Microsoft automatically creates a new Windows Azure Active Directory that is associated with the Office 365 account. No action is required on the part of the individual signing up. This is the simplest approach; involving an on-premises AD makes it a hybrid deployment.

In the diagram above, an organization can federate Windows Server Active Directory with Windows Azure Active Directory to give its users single sign-on to SaaS applications. In this scenario, a user at organization B wishes to access a SaaS application. Before she does this, the organization’s directory administrators must establish a federation relationship with Windows Azure AD using AD FS, as the figure shows. Those admins must also configure data synchronization between the organization’s on-premises Windows Server AD and Windows Azure AD. This automatically copies user and group information from the on-premises directory to Windows Azure AD. Notice what this allows: In effect, the organization is extending its on-premises directory into the cloud. Combining Windows Server AD and Windows Azure AD in this way gives the organization a directory service that can be managed as a single entity, while still having a footprint both on-premises and in the cloud.

To use Windows Azure AD, the user first logs in to her on-premises Active Directory domain as usual (step 1). When she tries to access the SaaS application (step 2), the federation process results in Windows Azure AD issuing her a token for this application (step 3). As before, this token contains information that identifies the user, and it’s digitally signed by Windows Azure AD. This token is then sent to the SaaS application (step 4), which validates the token’s signature and uses its contents (step 5). As in the previous scenario, the SaaS application can use the Graph API to learn more about this user if necessary (step 6).

Today, Windows Azure AD isn’t a complete replacement for on-premises Windows Server AD. As already mentioned, the cloud directory has a much simpler schema, and it’s also missing things such as group policy, the ability to store information about machines, and support for LDAP. (In fact, a Windows machine can’t be configured to let users log in to it using nothing but Windows Azure AD – this isn’t a supported scenario.) Instead, the initial goals of Windows Azure AD include letting enterprise users access applications in the cloud without maintaining a separate login and freeing on-premises directory administrators from manually synchronizing their on-premises directory with every SaaS application their organization uses. Over time, however, expect this cloud directory service to address a wider range of scenarios.

Since 2011 Microsoft have enhanced Windows Azure Active Directory by adding new, Internet-focused connectivity, mobility, and collaboration capabilities that offer value to applications running anywhere and on any platform. This includes applications running on mobile devices like iPhone, cloud platforms like Amazon Web Services, and technologies like Java. So it doesn’t only have to be Microsoft technologies it utilises, it can be multi-platform.

Azure Active Directory Access Control

Windows Azure Active Directory (WAAD/ AAD) can give an organization’s users single sign-on to multiple SaaS applications, for example. But identity technologies in the cloud can also be used in other ways.

Suppose, for instance, that an application wishes to let its users log in using tokens issued by multiple identity providers (IdPs). Lots of different identity providers exist today, including Facebook, Google, Microsoft, and others, and applications frequently let users sign in using one of these identities. Why should an application bother to maintain its own list of users and passwords when it can instead rely on identities that already exist? Accepting existing identities makes life simpler both for users, who have one less username and password to remember, and for the people who create the application, who no longer need to maintain their own lists of usernames and passwords.

But while every identity provider issues some kind of token, those tokens aren’t standard – each IdP has its own format. Furthermore, the information in those tokens also isn’t standard. An application that wishes to accept tokens issued by, say, Facebook, Google, and Microsoft is faced with the challenge of writing unique code to handle each of these different formats.

But why do this? Why not instead create an intermediary that can generate a single token format with a common representation of identity information? This approach would make life simpler for the developers who create applications, since they now need to handle only one kind of token. Windows Azure Active Directory Access Control does exactly this, providing an intermediary in the cloud for working with diverse tokens.

The image above shows how Windows Azure Active Directory Access Control makes it easier for applications to accept identity tokens issued by different identity providers. The process begins when a user attempts to access the application from a browser. The application redirects her to an IdP of her choice (and one that the application also trusts). She authenticates herself to this IdP, such as by entering a username and password (step 1), and the IdP returns a token containing information about her (step 2). As the image shows, Access Control supports a range of different cloud-based IdPs, including accounts created by Google, Yahoo, Facebook, Microsoft (formerly known as Windows Live ID), and any OpenID provider. It also supports identities created using Windows Azure Active Directory and, through federation with AD FS, Windows Server Active Directory. The goal is to cover the most commonly used identities today, whether they’re issued by IdPs in the cloud or on-premises.

Once the user’s browser has an IdP token from her chosen IdP, it sends this token to Access Control (step 3). Access Control validates the token, making sure that it really was issued by this IdP, then creates a new token according to the rules that have been defined for this application. Like Windows Azure Active Directory, Access Control is a multi-tenant service, but the tenants are applications rather than customer organizations. Each application can get its own namespace, as the figure shows, and can define various rules about authorization and more.

These rules let each application’s administrator define how tokens from various IdPs should be transformed into an Access Control token. For example, if different IdPs use different types for representing usernames, Access Control rules can transform all of these into a common username type. Access Control then sends this new token back to the browser (step 4), which submits it to the application (step 5). Once it has the Access Control token, the application verifies that this token really was issued by Access Control, then uses the information it contains (step 6).

While this process might seem a little complicated, it actually makes life significantly simpler for the creator of the application. Rather than handle diverse tokens containing different information, the application can accept identities issued by multiple identity providers while still receiving only a single token with familiar information. Also, rather than require each application to be configured to trust various IdPs, these trust relationships are instead maintained by Access Control – an application need only trust it. It’s worth pointing out that nothing about Access Control is tied to Windows – it could just as well be used by a Linux application that accepted only Google and Facebook identities.

And even though Access Control is part of the Windows Azure Active Directory family, you can think of it as an entirely distinct service from what was described in the previous section. While both technologies work with identity, they address quite different problems (although Microsoft has said that it expects to integrate the two at some point). Working with identity is important in nearly every application. The goal of Access Control is to make it easier for developers to create applications that accept identities from diverse identity providers. By putting this service in the cloud, Microsoft has made it available to any application running on any platform.

There is no requirement for a Primary AD controller to be moved to Azure, you just ‘connect’ and sync up.


Setting it up involves running a few PowerShell commands to set up a federation trust between AD FS and AAD. Single sign-on needs to be implemented, and the domain you federate must be a top-level domain. Full steps here
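To give you a feel for it, here is a rough sketch of those commands using the Windows Azure AD Module for PowerShell (MSOnline). The server and domain names are placeholders – substitute your own AD FS server and tenant admin credentials:

```powershell
# Run from (or pointed at) the AD FS server, with the Windows Azure AD
# Module for PowerShell installed.
Import-Module MSOnline

# Sign in with your Office 365 / Windows Azure AD tenant admin credentials.
Connect-MsolService

# Tell the federation cmdlets which AD FS server to talk to
# (hypothetical server name).
Set-MsolADFSContext -Computer adfs01.contoso.local

# Convert the (top-level) domain from standard to federated authentication.
# This is what creates the trust between AD FS and Windows Azure AD.
Convert-MsolDomainToFederated -DomainName contoso.com

# Sanity-check the federation settings on both sides.
Get-MsolFederationProperty -DomainName contoso.com
```

Directory synchronisation (the DirSync tool) is set up separately and takes care of copying the users and groups up to AAD.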

This is the only recommended implementation approach I have found for this hybrid deployment. If anyone can highlight other approaches I would be most interested to know.

Over and Out


SharePoint 2013 Search not provisioning correctly

Imagine one fine sunny day you built a new SharePoint 2013 Search Application, for a new server or an old one, and the rain came in the form of an error accessing the application after Central Admin stated clearly that it was created successfully. Here’s what you would be experiencing:

After creating the Search Service Application, the following issues arise:

  1. The search topology displays the following message: “Unable to retrieve topology component health states. This may be because the admin component is not up and running.”
  2. The default content source “Local SharePoint Sites” is inconsistent. It doesn’t always appear after creation of Search, sometimes with start addresses of existing web apps listed already, other times not.
  3. Starting a full crawl results in the crawl getting stuck at ‘Starting’.

So you can’t configure your search. In PowerShell, all search components are listed as available.
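You can see this for yourself with a quick check – a sketch only, assuming a single Search Service Application (the name is whatever you gave it in Central Admin):

```powershell
# Load the SharePoint snap-in if running from a plain PowerShell console.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Grab the Search Service Application and ask each topology component
# for its state - in this scenario they all report as healthy, even
# though the topology page in Central Admin can't retrieve them.
$ssa = Get-SPEnterpriseSearchServiceApplication "Search Service Application"
Get-SPEnterpriseSearchStatus -SearchApplication $ssa |
    Select-Object Name, State
```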

Event Viewer shows:

Application Server Administration job failed for service instance Microsoft.Office.Server.Search.Administration.SearchServiceInstance (cf15c8c7-1980-4391-bd97-17de75c4dd5d).

Reason: Failed to connect to system manager. SystemManagerLocations: http://sp2013/AdminComponent1/Management

Technical Support Details:

System.InvalidOperationException: Failed to connect to system manager. SystemManagerLocations: http://sp2013/AdminComponent1/Management ---> System.ServiceModel.EndpointNotFoundException: There was no endpoint listening at http://sp2013/AdminComponent1/Management/NodeController that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.

Server stack trace:

   at System.ServiceModel.Channels.ConnectionUpgradeHelper.DecodeFramingFault(ClientFramingDecoder decoder, IConnection connection, Uri via, String contentType, TimeoutHelper& timeoutHelper)

   at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.SendPreamble(IConnection connection, ArraySegment`1 preamble, TimeoutHelper& timeoutHelper)

   at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.DuplexConnectionPoolHelper.AcceptPooledConnection(IConnection connection, TimeoutHelper& timeoutHelper)

   at System.ServiceModel.Channels.ConnectionPoolHelper.EstablishConnection(TimeSpan timeout)

   at System.ServiceModel.Channels.ClientFramingDuplexSessionChannel.OnOpen(TimeSpan timeout)

   at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)

   at System.ServiceModel.Channels.ServiceChannel.OnOpen(TimeSpan timeout)

   at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)

   at System.ServiceModel.Channels.ServiceChannel.CallOnceManager.CallOnce(TimeSpan timeout, CallOnceManager cascade)

   at System.ServiceModel.Channels.ServiceChannel.EnsureOpened(TimeSpan timeout)

   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)

   at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)

   at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

Exception rethrown at [0]:

   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)

   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)

   at Microsoft.Ceres.CoreServices.Services.Node.INodeControllerManagementAgent.get_NodeName()

   at Microsoft.Ceres.CoreServices.Tools.Management.Client.AbstractSystemClient.TryConnect(Uri nodeManagementUri, ICollection`1 locationsToTry, ICollection`1 triedLocations, ICollection`1 nodeSystemManagerLocations)

   — End of inner exception stack trace —

   at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.Synchronize()

   at Microsoft.Office.Server.Administration.ApplicationServerJob.ProvisionLocalSharedServiceInstances(Boolean isAdministrationServiceJob)

After much digging and playing around, it seems this error goes away if I change the app pool that Search runs under to the SharePoint Web Services Default. But this is not what we want; Search should run under its own app pool. So after much more digging, I discovered that this is another bug in SharePoint 2013, fixable by applying the following hotfixes:

Give the server a good old reboot and things should work fine. That’ll save you a few days of trouble.
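For reference, the temporary app-pool workaround I mentioned can be scripted too. This is a sketch only, assuming default names – and remember it’s a stopgap, the hotfixes plus a reboot are the proper fix:

```powershell
# Load the SharePoint snap-in if running from a plain PowerShell console.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Point the Search Service Application at the default web services app pool.
# Temporary workaround only - Search should normally have its own pool.
$pool = Get-SPServiceApplicationPool "SharePoint Web Services Default"
Get-SPEnterpriseSearchServiceApplication |
    Set-SPEnterpriseSearchServiceApplication -ApplicationPool $pool
```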

Peace love and happiness!

The Art of Installing SharePoint 2013 in a 3 Tier Topology – Part One

Ok, so you’re a support person/ newbie/ trainee SharePoint guru, and you have been asked to install SharePoint 2013 with the following topology or similar:

Now, SharePoint 2013 looks similar to 2010 on the surface, but under the hood they are very different. Although this topology is quite rare, I had to re-create it for my app development, architecture and POC creation, and I’m going to take you through the steps of creating it from scratch. I’m going to assume this is your first SharePoint installation, so all the gurus out there, be patient! It could be a long post, so I might put in some links to the information I used for reference to complete some of my tasks. There’s a lot of info out there on installing SharePoint 2013 (SP2013 from here on in), so by all means have a gander and design your own custom installation solution using different aspects of others, as I have done.

First off, please read and understand the hardware and software requirements for SP2013; they are very important. The nature of the beast is that SharePoint is getting more splendid, and all these beautiful features require more and more memory – this is a good link to read to get you thinking about browser and IP support, as well as software boundaries, limits and capacity management. Before installing SharePoint 2013 you need to really understand what it is you are doing; it will save many days of frustration and prevent many grey hairs! Ok, so here we go…

Hardware RoadMap

I created all of this on a machine with 16GB RAM; my OS was Server 2008 R2 with the Hyper-V role installed. All servers run on 64-bit processors.

My plan for memory allocation was as such:

AD Server – 2GB RAM, 80GB hard drive, 64-bit processor

Database Server – 3GB RAM, 80GB hard drive, 64-bit processor (8GB RAM is usual for SP2013, but I have used 3GB and not suffered any repercussions yet; you can always increase this if the need arises, especially in Hyper-V environments)

SharePoint Server – 8GB RAM, 80GB hard drive, 64-bit processor (Microsoft recommends at least 12GB for production deployments, but really I think they are being overly cautious because of performance issues (page loads etc.) encountered with SharePoint 2010. I work with my 8GB just fine, no problems yet, but this can always be increased if required)

I created all these Hyper-V servers using Hyper-V Manager on my host OS. See these 2 links on creating a virtual machine from the host OS (Windows Server 2008 R2 in my case):

The next links helped me understand the different network options available in Hyper-V and how basic networking in Hyper-V environments works:

One other very important step is providing internet access to your machines once you have created them. I was connecting to the internet on my host OS using a wireless connection, and used the following link to facilitate that:

Software RoadMap

My software list is as follows:

AD Server – Windows 2008 R2 with AD Services role installed. I used the following link to help with setting up my domain and promoting my server to be a domain controller:

To install the OS once I had created my Hyper-V machine, I attached the installation media, i.e. pointed Hyper-V to where the Windows Server 2008 R2 installation media sat on my host OS. Don’t forget to install DHCP, WINS and Enterprise Certificate Services.

*Note: before I installed the AD role on the server, I updated the machine with the latest service packs and updates, as the domain controller is not really meant to have any internet access, for security reasons.

SQL Server – Windows Server 2012 and SQL Server 2012 Standard (there is no longer an Enterprise edition of Windows Server; only Standard and Datacenter).

I used this link to help me set up Server 2012 and add it to my domain (Created on the domain controller):

The process of installing SQL is quite laborious so I will help you through that one in the next blog post. Don’t forget to install updates after you have installed all your required software.

SharePoint Server – Windows Server 2012, SharePoint Server 2013, Visual Studio 2013, Office 2013, SharePoint Designer 2013, and any other dev tools you usually use. I included ULS Viewer, the CKSDev Visual Studio tools from CodePlex, the SP2013 Client Components SDK, and whatever else you might use for your development machine.

So I knew I had a big task ahead of me to set this all up, so I made a list of all my required tasks and it looked something like this:

  1. Create, install and configure AD Server and Domain and create Service and Farm accounts (See SharePoint installation blog post for a list of these)
  2. Create, install and configure Database Server
    1. Install Server 2012 and configure
    2. Install SQL Server 2012
    3. Add the SharePoint admin and farm accounts to the database server and apply the correct permissions (spadmin needs the dbcreator and securityadmin roles, as well as being an admin on the SQL and SharePoint servers). See here and here
    4. Set up max degree of parallelism; some info here
    5. Configure SQL Server security
  3. Create, install and configure SharePoint 2013 Server
    1. Set up server with Web server and Application server roles
    2. Install and configure SharePoint 2013
    3. Install and configure Visual Studio 2013, Office 2013, SharePoint Designer 2013, ULS Viewer and other software
    4. Configure Managed Accounts and set up logging, email etc
    5. Set Up AD synchronisation
    6. Download and install SP2013 Client Components SDK
    7. Update all servers
    8. Backup all servers and create future backup schedule
  4. Pour a much needed Long island Ice tea and assume the recovery position! Lol that’s optional!
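On the max degree of parallelism point (step 2.4): SharePoint 2013 requires MAXDOP to be set to 1 on its SQL instance, or the configuration wizard will complain. Here’s a quick sketch of doing that from PowerShell on the SQL server – assuming the SQLPS module is available and a default local instance:

```powershell
# SharePoint 2013 requires max degree of parallelism = 1 on its SQL instance.
Import-Module SQLPS -DisableNameChecking

Invoke-Sqlcmd -ServerInstance "localhost" -Query @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
"@
```

You can do the same thing through SQL Server Management Studio under the instance’s Advanced properties if you prefer clicking to typing.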

Ok, so for part 2 I’m gonna assume you have your own plan ready to go, and that you have an AD server with a working domain. Then we’ll install and configure SQL Server 2012.