
Monday, 25 July 2016

A Beginners Guide to OpenIDM - Part 3 - Connectors

Previously in this series we have looked at a general overview of OpenIDM and had a detailed look at objects. In this blog I want to explore connectors.

Connectors are the integration glue that enables you to bring data into OpenIDM from all sorts of different systems and data stores. We will take a look at the different types of connectors available in OpenIDM and how they work, and end with a practical example of configuring a connector.

This blog continues my OpenIDM Beginners series, catch up with the links below:

A Beginners Guide to OpenIDM - Part 1
A Beginners Guide to OpenIDM - Part 2 - Objects
A Beginners Guide to OpenIDM - Part 3 - Connectors
A Beginners Guide to OpenIDM - Part 4 - Mappings

Connectors

Architecture

Every identity system that I have ever worked with has a concept similar to connectors. Usually they are Java libraries or scripts that perform the actual push and pull of data to and from a target data source.

Standard connector operations in OpenIDM include:
  • Create: Create a new object ( usually an account ) in a target data store.
  • Update: Update an existing object e.g. if a user changes their email address then we may want to update the user record in a target data store.
  • Get: Retrieve a specific instance of an object ( e.g. an account) from a target data store.
  • Search: Query the collection and return a specific set of results.
There are a number of other operations which we will explore in later blogs.

At a high level connectors are comprised of:
  • Provisioner configuration: configuration data defining the connector usually containing:
    • Reference to the underlying Java class that implements the connector. This should be populated automatically when you choose your connector type. You can explore the connector source code if you like but for the most part you shouldn't need to be concerned with the underlying classes.
    • All of the credentials and configuration needed to access the data store. You need to configure this.
    • The data store schema for the object or account. You need to configure this.
Connectors are configured through the user interface but, like all OpenIDM configuration, they are also stored ( and can be edited ) locally on the file system. Connector configuration files ( like most OpenIDM configuration files ) can be found in openidm/conf and have the following naming convention:

provisioner.openicf-something.json ( where something is whatever you have named your connector ).

Note that connector configuration files will not appear until you have configured a connector using the UI; we will revisit this later.
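To give you a feel for the format, here is a heavily trimmed sketch of the sort of content you would find in provisioner.openicf-UserLoadCSV.json, the file behind the CSV connector we will build later in this post. Treat the bundle, class and property names as illustrative; they vary between connector types and OpenIDM versions.

{
    "connectorRef" : {
        "bundleName" : "org.forgerock.openicf.connectors.csvfile-connector",
        "connectorName" : "org.forgerock.openicf.csvfile.CSVFileConnector"
    },
    "configurationProperties" : {
        "csvFile" : "/home/data/users.csv",
        "headerUid" : "id",
        "headerName" : "username"
    },
    "objectTypes" : {
        "__ACCOUNT__" : {
            "nativeType" : "__ACCOUNT__",
            "properties" : { }
        }
    }
}

We will come back to the objectTypes section when we look at Object Types later in this post.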

The logical flow in OpenIDM for utilising connectors is as follows:
  • The data synchronization engine outputs data and a requested operation, e.g. create, delete, update or one of several others.
  • The provisioner engine invokes the connector class with the requested operation and the data from the synchronization engine.
  • The connector class uses the configuration parameters from the provisioner file, and the data passed in the invocation, to actually do the work and push or pull data to or from the target.

Connector Example

So now we have a basic understanding of how connectors work, let's try configuring one.

I'm going to use the CSV connector for this example and we are going to read users from a Comma Separated Values ( CSV ) file. Ultimately we will be reading this data into the managed user object using a mapping. For this blog though we will just focus on configuring the connector.

Feel free to use any CSV file but if you want to follow along with the example then download the CSV here that I created using Mockaroo.



Copy the file to somewhere on the same file system that OpenIDM is installed on; it doesn't matter where, so long as OpenIDM can access it. I'm going to use /home/data/users.csv
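For reference, the key columns in my file look something like the rows below. These rows are illustrative: the Mockaroo export contains more columns, and your values will differ, but the id and username columns matter for what follows.

id,username,email
1,tgardner0,tgardner0@nsw.gov.au
2,kwilson1,kwilson1@example.com
3,amorgan2,amorgan2@example.com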

Then log in to OpenIDM as administrator. Navigate to Configure, then Connectors.



Press "New Connector"



You will see the UI for configuring a new connector:




Give your new connector a name (I have used UserLoadCSV above - no spaces permitted), and look at the connector types. These are all the different systems you can integrate with.

Note that with further configuration, more connectors are available, and using the scripted connector you can pretty much integrate with any system that offers a suitable API.

Select the "CSV File Connector". Now we need to complete the "Base Connector Details". Starting with the path to the CSV File we actually want to process.


Now let's take a look at the next few fields:



They are populated by default but we need to change them to match our spreadsheet.

Looking at the data:
  • Header UID = id
  • Header Name = username

So in this instance we just need to change the Header UID to match.


You will note there are a few more fields:
  • Header Password: We will not be processing any passwords from this CSV. That might be something you want to do, although typically you will have OpenIDM generate passwords for you ( more on that later ).
  • Quote Character: If you have an unusually formatted CSV, you can change the character that surrounds your data values. This is used by OpenIDM to successfully parse the CSV file.
  • Field Delimiter: Similarly if you are using a delimiter ( the character that splits up data entries ) that is anything other than a "," you can tell OpenIDM here.
  • Newline String: As above.
  • Sync Retention Count: Todo

Note that these parameters are all unique to the CSV connector. If you were to use another connector, say the database connector, you would have a different set of parameters that must be configured for OpenIDM to successfully connect to the database and query the table.

Ok, with all that done let's add the connector:



All being well you should get a positive confirmation message. Congratulations, you have added a connector! All very well but what can we do with it?

Click on the menu option ( the vertical dots):


Then Data (__ACCOUNT__)



If you have done everything correctly you should see the data from the CSV in OpenIDM!



It is important to understand that, at this point, the data has not been loaded into OpenIDM; OpenIDM is simply providing a live view of the data in the CSV. This works for any connector and we will revisit it at the end of this blog.

Before that, there are a few things I want to cover. Go back to the Connector screen; you should have a new Connector:



Select it, and select "Object Types":



Then edit "_ACCOUNT_". 





What you should be able to see is a list of all of the attributes in the CSV file. OpenIDM has automatically parsed the CSV and built a schema for interpreting the data. You may also spot "__NAME__". This is a special attribute, and it maps to the Header Name attribute we configured earlier.


Again, the concept of Object Type is common to all connectors, and sometimes additional configuration of the Object Type may be required in order to successfully process data.
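For the curious, this Object Type screen is really just editing the objectTypes section of the provisioner file we looked at earlier. A trimmed, illustrative sketch of how the __ACCOUNT__ type and the special __NAME__ attribute might look; the remaining properties are generated from your CSV columns, and the exact flags vary by version:

"objectTypes" : {
    "__ACCOUNT__" : {
        "$schema" : "http://json-schema.org/draft-03/schema",
        "id" : "__ACCOUNT__",
        "type" : "object",
        "nativeType" : "__ACCOUNT__",
        "properties" : {
            "__NAME__" : {
                "type" : "string",
                "nativeName" : "__NAME__",
                "required" : true
            },
            "email" : {
                "type" : "string",
                "nativeName" : "email"
            }
        }
    }
}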

Finally, let's take a look at Sync:


On this page you can configure LiveSync. LiveSync is a special case of synchronization. Ordinarily synchronization is performed through the mappings interface ( or automatically on a schedule ).

However if the connector and target system support it, then LiveSync can be used. With LiveSync changes are picked up as they occur in the target. Ordinarily with a normal synchronization ( often called reconciliation ) all accounts in the target must be examined against the source for changes. With LiveSync, only accounts in the target that have changed will be processed. For this to work the target must support some form of change log that OpenIDM can read. In systems with large numbers of accounts this is a much more efficient way of keeping data in sync.
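For reference, LiveSync is normally driven by a schedule that repeatedly invokes the connector's liveSync action. A minimal sketch, assuming a hypothetical schedule file conf/schedule-liveSyncUserLoadCSV.json and a 30 second polling interval:

{
    "enabled" : true,
    "type" : "cron",
    "schedule" : "0/30 * * * * ?",
    "invokeService" : "provisioner",
    "invokeContext" : {
        "action" : "liveSync",
        "source" : "system/UserLoadCSV/__ACCOUNT__"
    }
}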

Connectors And The REST API

As before, we can make use of the REST API here to query our new connector. We can actually use the API to read or write to the underlying CSV data store. Just take a moment to think about what that means. In an enterprise implementation you might have hundreds of different data stores of every type. Once you have configured connectors to OpenIDM you can query those data stores using a single, consistent and centralised RESTful API via OpenIDM. That really is a very powerful tool.

Let's take a look at this now. Navigate back to the data accounts page from earlier:


Take a look at the URL:


As before, this corresponds to our REST API. Please fire up Postman again. 

Enter the following URL

http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryId=query-all-ids

You should see the following result
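The response is a JSON result set with one entry per row of the CSV, keyed by the Header UID column. It looks something along these lines ( the exact wrapper fields vary slightly between OpenIDM versions ):

{
    "result" : [
        { "_id" : "1" },
        { "_id" : "2" },
        { "_id" : "3" }
    ],
    "resultCount" : 3,
    "pagedResultsCookie" : null,
    "remainingPagedResults" : -1
}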



We have just queried the CSV file using the REST API and retrieved the id of every record in the file.

Let's try retrieving the data for a specific user:

http://localhost.localdomain.com:8080/openidm/system/UserLoadCSV/__ACCOUNT__?_queryFilter=/email eq "tgardner0@nsw.gov.au"


Here we are searching for the user with the email address tgardner0@nsw.gov.au.
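This time the full record comes back rather than just the id. With my mock data the response has roughly this shape ( the name column typically surfaces as __NAME__, the other fields depend on the columns in your CSV, and the values shown here are illustrative ):

{
    "result" : [
        {
            "_id" : "1",
            "__NAME__" : "tgardner0",
            "email" : "tgardner0@nsw.gov.au"
        }
    ],
    "resultCount" : 1
}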



Again, this is just a small sample of what the REST API is capable of; you can learn much more here:

https://forgerock.org/openidm/doc/bootstrap/integrators-guide/index.html#appendix-rest

And more on how queries work here: 

https://forgerock.org/openidm/doc/bootstrap/integrators-guide/#constructing-queries


Come back next time for a look at mappings where we will join together the managed user and the connector to actually create some users in the system. 

Wednesday, 6 July 2016

It's The Little Things - Authentication Chains

Authentication Chains

We have not talked much about OpenAM on the blog. AM has some really great features that make it very simple to use. Perhaps my favourite feature is the authentication chains UI.

Let's take a quick look at what an authentication chain looks like, then we will talk through it and have a go at creating a brand new one. I assume you are using OpenAM 13.
You can see what an auth chain looks like above. Essentially it is a series of steps ( I think of them as Lego like building blocks ) for authentication. Each block represents a different mechanism for authenticating. In addition, each block is also assigned one of four authentication behaviors ( required, optional, requisite & sufficient ) which determine how ( and if ) one block flows into the next, depending on whether that block succeeds.

In short, successful authentication requires at least one pass and no fail flags.

In the above example there are four blocks, let's look at each in turn:

  • DataStore: Basic username and password authentication against the OpenAM data store. If this step is a:
    • FAIL: The user hasn't even got their username and password right. We definitely are not letting them in, and as such exit the chain with a FAIL.
    • PASS: The username and password are correct. We move to the next block in the chain, DeviceMatch.
  • DeviceMatch: First step of device match authentication ( essentially asking the question: has OpenAM seen the user log in from this device before? ). If this step is a:
    • CONTINUE: OpenAM has not seen the user log in using this particular laptop or mobile before. This block has not passed but, because it is sufficient, that does not equate to a fail flag. We have to be a bit more suspicious and go into the TwoFactor block.
    • PASS: This is a device the user has used before and OpenAM recognises it. At this point the user has authenticated with username and password from a recognised device. We exit the chain with a PASS. 
  • TwoFactor: Challenge the user to provide the code from a two factor mobile soft token. This second factor proves that not only does the user have the right username and password, but also that they have the mobile device they originally registered with in their possession. If this step is a:
    • FAIL: The user has failed 2FA. At this point we don't have the confidence this is really the user being claimed and exit with a FAIL.
    • PASS: The user has entered a valid code from the authenticator. We move on to the final block in the chain, DeviceSave.
  • DeviceSave: The last step of device match authentication. We save a record of the device so we can match it next time in the DeviceMatch step. If this step is a:
    • FAIL: The user is not actually being challenged for anything at this point. Authentication is effectively complete; we just need to save the device, which will not fail.
    • PASS: We have now saved the device. In future, so long as the user continues to use this particular laptop or mobile to log in, they will not have to complete the TwoFactor step.

Note that I have chosen the above authentication "blocks" for this particular blog. I could easily have used others. There are many different types of blocks available in OpenAM covering nearly every conceivable authentication requirement.

I think the way OpenAM allows you to quickly use these building blocks to build authentication visually is really neat.

Let's now try building the above chain in OpenAM.

Building an Authentication Chain

Firstly we need to create the authentication building blocks we want. I am going to assume you have an installation of OpenAM up and running with a Top Level Realm configured ( though you can do this in any realm ).

Select the realm:


And navigate to Authentication, then Modules.

Out of the box the above modules are configured. We need to configure a few more.

Press Add Module, select "Device Match" from the drop down and give it a name ( I used DeviceMatch earlier ).


Press Create and you should see the configuration screen:


The defaults are fine here, just press Save Changes.

Now repeat the last two steps for the Device Id ( Save ) and ForgeRock Authenticator (OATH) modules.

When this is done you should have the following modules:


Now we need to create a new authentication chain. Navigate to Authentication, then Chains.



Press Add Chain and give it a name ( I used secureAuthService above ), then press Create. You will now have an empty authentication chain.


Now just add the modules. You don't have to worry about the order; just add all the modules as in my example at the start of this blog:


If you get the order wrong, don't worry about it! Just drag and drop authentication blocks to move them around. Ensure you have set the Criteria as follows:

DataStore: Requisite
DeviceMatch: Sufficient
TwoFactor: Requisite
DeviceSave: Required

Save Changes and you are done. That's all there is to it!

Not quite... there is one additional step I want to do here. By default Two Factor is optional for end users. In some cases that is desirable; it's an additional security control, and if you are a big retailer you don't want to force it on users, but you do want it to be an option for them.

However, in this demo I want to make it mandatory. To do so, navigate to Authentication, Settings, then General and check the Two Factor Authentication Mandatory box.


Then Save Changes.

Testing the Authentication Chain

So how do we test the authentication chain? Well, remember we named it secureAuthService? Let's try logging in using the following URL:

http://localhost.localdomain.com:18080/openam/login?service=secureAuthService

Then try entering the standard demo and changeit credentials.


You would normally be logged into OpenAM at this point; instead, you should see the following:

This is the DeviceMatch module doing its work. Make sure to press Share Location.

Note: this is just a default; capturing location is optional.

As this is the first time I am logging in using this device, I need to use the ForgeRock Authenticator as a second factor.

Note: for this explanation I have already downloaded the ForgeRock Authenticator from the Apple App Store ( or Google Play ). I have also already registered it with OpenAM. The first time you do this you will be asked to register and will need to scan a QR code displayed by OpenAM. This is relatively straightforward but feel free to leave questions in the comments.


I now enter the code generated by the ForgeRock Authenticator on my phone and, assuming I get that right and press SUBMIT, I am then asked if I want to trust this device ( the laptop I am logging in from ) and to give it a name:



After which I am successfully logged into OpenAM!


Now, if you try logging out and back in, you won't be challenged for 2FA, so long as you are using the same laptop.

One more thing. If you log in again and navigate to DASHBOARD, you can see the trusted profile for your laptop and the 2FA token. If you want you can delete the trusted profile, at which point OpenAM no longer knows about your laptop and will challenge you for 2FA again.


Authentication chains are really easy to understand and configure, and incredibly powerful.





Monday, 4 July 2016

A Beginners Guide to OpenIDM - Part 2 - Objects

This blog continues my OpenIDM Beginners series, catch up with the links below:

A Beginners Guide to OpenIDM - Part 1
A Beginners Guide to OpenIDM - Part 2 - Objects
A Beginners Guide to OpenIDM - Part 3 - Connectors
A Beginners Guide to OpenIDM - Part 4 - Mappings
A Beginners Guide to OpenIDM - Part 5 - User Registration
A Beginners Guide to OpenIDM - Part 6 - Provisioning to Active Directory

Overview

At the heart of OpenIDM are managed objects. Out of the box three managed objects are configured:
  • User: User identities, effectively this is your central identity store.
  • Role: An object for modelling roles.
  • Assignment: An object for modelling assignments. Assignments are effectively ways of capturing sets of entitlements across mappings, which can then be associated with roles.
In this blog we will examine the user managed object in detail; roles and assignments will be explored later in the series.

It is important to understand that objects can really be anything and you can create new objects very easily. This is an incredibly powerful way to model all sorts of different things:
Users, Organisations, Teams, Devices, Products and anything else you can think of! Managed objects are completely configurable.
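All of this lives in conf/managed.json. As a purely hypothetical illustration, adding a minimal device object alongside the out of the box ones is just another entry in the objects array ( the object name and properties below are invented for this example ):

{
    "objects" : [
        {
            "name" : "device",
            "schema" : {
                "title" : "Device",
                "type" : "object",
                "properties" : {
                    "serialNumber" : {
                        "title" : "Serial Number",
                        "type" : "string",
                        "viewable" : true,
                        "searchable" : true
                    },
                    "model" : {
                        "title" : "Model",
                        "type" : "string",
                        "viewable" : true
                    }
                },
                "required" : [ "serialNumber" ]
            }
        }
    ]
}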

Not only can you model things, but you can also model the relationships between things. For example:
  • Which organisations a user belongs to.
  • The devices that a user owns.
  • The products a user has.
  • The teams that belong to an organisation.
  • Anything else you can think of!

Objects

All objects have the following properties:

  • Details: The name and icon that represents the object in the UI.
  • Schema: Properties, their validation rules and their relationships.
  • Scripts: Different hooks for running scripts throughout the object lifecycle e.g. postCreate
  • Properties: Rules for special attribute behaviors e.g. passwords should be encrypted and private.
Let's look at each of these in detail.

Details

Not much to say here. Just the name of your object and you can select a funky icon that will be displayed throughout the interface wherever your object is used.
Schema

The properties that actually comprise your object. Let's take a look at the managed user schema.




On the left, under Schema Properties you can see each property that comprises a user. There are many properties available out of the box and you can easily add or remove properties as required.

Let's look at a property in detail.




So what does a property consist of? ( a sketch of how this looks in managed.json follows the list below )


  • Property Name: The internal name used within the OpenIDM platform to refer to the property; think of it like a variable name, only used internally.
  • Readable Title: The name that will be used to refer to the property in the user interface.
  • Description: Simple description of the attribute that when populated is used throughout the interface as a tooltip.
  • Viewable: Can it be seen in the UI?
  • Searchable: Is it indexed and searchable in the UI?
  • End users allowed to edit: Whether end users are allowed to update the value using self service.
  • Minimum Length: Minimum length of the attribute value.
  • Pattern: Any specific pattern to which the value of the property must adhere. e.g. date formats.
  • Validation Policies: Rules that can be used to define attribute behavior. We will look at these in detail in a moment.
  • Required: Must be populated with a value.
  • Return by Default: If true, will be returned when user details are requested via the API. If false, will only be returned if specifically asked for.
  • Type: Type of the attribute: String, Array, Boolean, Integer, Number, Object or Relationship. We will look at relationships in a moment.
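As a rough sketch, a property with the fields above appears in the user object's schema in conf/managed.json something like this. The property itself is invented for illustration, and the exact set of flags depends on your OpenIDM version:

"nickname" : {
    "title" : "Nickname",
    "description" : "The user's preferred short name",
    "type" : "string",
    "viewable" : true,
    "searchable" : false,
    "userEditable" : true,
    "minLength" : 2
}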

Validation Policies

Validation policies are ways to validate the attribute. The example below checks that the mail attribute is a valid email address. This prevents the user from inputting an invalid email address during self registration or an administrator changing the email incorrectly.



Similarly, for the password attribute, validation policies allow you to enforce password rules, for example:
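In conf/managed.json these rules appear as a policies array on the property. A trimmed sketch of the sort of thing you will find on the out of the box mail and password properties ( treat the policy ids and parameters as illustrative and check your own file for the exact set ):

"mail" : {
    "type" : "string",
    "policies" : [
        { "policyId" : "valid-email-address-format" }
    ]
},
"password" : {
    "type" : "string",
    "policies" : [
        { "policyId" : "at-least-X-capitals", "params" : { "numCaps" : 1 } },
        { "policyId" : "at-least-X-numbers", "params" : { "numNums" : 1 } }
    ]
}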




Relationships

Relationships are incredibly powerful and really at the heart of what OpenIDM does. If you have installed OpenIDM in part 1 then I recommend you take a look at the out of the box managed objects to really understand this; however, we will briefly discuss it here.

The out of the box managed user object defines a relationship between managers and reports.


manager:


reports:


What are we saying here?


  • Users have a manager. This is a Relationship. It is in fact a reverse relationship: manager A has reports X, Y and Z, and reports X, Y and Z have the manager A.
  • Users can also have reports, and they may have multiple. Note this is an Array of Relationships: A manages X, A manages Y, A manages Z. Likewise this is a reverse relationship.
Relationships let you model the links between all sorts of objects: users, organisations, devices, products, anything.
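In conf/managed.json terms, the manager and reports definitions look roughly like the sketch below. It is trimmed, and the exact fields vary by version, so check the out of the box file for the full detail:

"manager" : {
    "type" : "relationship",
    "reverseRelationship" : true,
    "reversePropertyName" : "reports",
    "resourceCollection" : [
        {
            "path" : "managed/user",
            "query" : { "queryFilter" : "true" }
        }
    ]
},
"reports" : {
    "type" : "array",
    "returnByDefault" : false,
    "items" : {
        "type" : "relationship",
        "reverseRelationship" : true,
        "reversePropertyName" : "manager"
    }
}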

Scripts

Objects also have events which can be used to trigger scripts or workflows.



Out of the box, the above scripts are configured:

onCreate: The script that runs when the object is created. In this case, a script is used to set the default fields for a user.

onDelete: The script that runs when the object is deleted. In this case, a script is used to clean up users after deletion.

These scripts are completely configurable and new scripts can easily be added.

If you try to add a new script you will see there are two options:

  • Script - Event triggers a script, this could be:
    • Inline Script: script defined within the UI.
    • File Path: a script stored within the OpenIDM configuration directory. This is how out of the box scripts work. If you navigate to /openidm/bin/defaults/script/ui you can examine these out of the box scripts to see what they do.
  • Workflow - Event can be used to trigger a workflow.
Note: If you add new scripts, these should be placed somewhere else, usually: /usr/local/env/box/openidm/script
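For reference, each hook is just a small block on the managed object in conf/managed.json. The first two entries below sketch the shape of the out of the box hooks ( the exact file names differ between versions ), and the postCreate entry shows a purely hypothetical custom script:

"onCreate" : {
    "type" : "text/javascript",
    "file" : "ui/onCreateUser.js"
},
"onDelete" : {
    "type" : "text/javascript",
    "file" : "ui/onDeleteUser.js"
},
"postCreate" : {
    "type" : "text/javascript",
    "file" : "sendWelcomeEmail.js"
}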

Scripting is a great way to do all sorts of things to help you manage object lifecycles.


Properties

Properties let you define additional behaviors for attributes.
  • Encrypted: The attribute value is encrypted. This means it can be decrypted and the value retrieved if required.
  • Private: Restricts HTTP access to sensitive data, if this is true the attribute is not returned when using the REST API.
  • Virtual: The attribute is calculated on the fly, usually from a script.
  • Hashed: The attribute is hashed. Hashing is a one way function and the usual way that passwords should be stored. You hash the password when a user registers for the first time. When they log in again you hash the password they enter and compare it against the original hash. If they match, you know the passwords are the same. Crucially, it is not feasible to take a hash and extract the original password from it.
A common example of using properties is to calculate effective roles. Effective roles are dynamically calculated using an out of the box script as a virtual property:



You can examine the script here: /openidm/bin/defaults/script/roles/effectiveRoles.js.
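In conf/managed.json a virtual property is flagged with isVirtual and given a script that calculates its value when the object is retrieved. A trimmed sketch based on the out of the box effectiveRoles property; the exact arguments differ between versions, so treat this as the shape rather than the letter:

"effectiveRoles" : {
    "type" : "array",
    "title" : "Effective Roles",
    "isVirtual" : true,
    "returnByDefault" : true,
    "onRetrieve" : {
        "type" : "text/javascript",
        "file" : "roles/effectiveRoles.js",
        "rolesPropName" : "roles"
    }
}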

Managed Objects and the REST API

For the final part of this blog I want to take a look at something I think is pretty cool. The OpenIDM REST API.

All managed objects ( including the ones you can create yourself ) are automatically made available using a REST API.

Using the API you can Create, Read, Update and Delete objects ( CRUD ), as well as search for and query objects. We will dive into the REST API in a later series but we can do a quick demo just to get a feel for how it works.

I recommend downloading Postman for this. Postman is a plug-in for Chrome that lets you easily invoke REST APIs. You can grab it here: https://www.getpostman.com/

Once you have Postman, log into OpenIDM as administrator, go to Manage, then User, and create a new user:



Press Save. Now look at the URL:



Note the long string of letters and numbers. This is the object id for our new user.

Now if we go to Postman, we can setup a new request:



Make sure you populate the headers as I have above. OpenIDM expects the administrative credentials in the X-OpenIDM-Username and X-OpenIDM-Password headers ( by default openidm-admin / openidm-admin ). Set the request to a GET and enter a URL to return. In our case:

http://localhost.localdomain.com:8080/openidm/managed/user/9a372a83-1ec0-4036-9a02-6557e8eb4ed7


How does this break down? openidm is the REST entry point, managed/user identifies the user managed object collection, and the final long string is the object id of the user we just created.



Now, if you press Send, you should retrieve the user we just created:
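The body of the response is the managed user as JSON, something along these lines. The attribute values are whatever you entered when creating the user ( the ones below are invented ), and _rev is simply OpenIDM's internal revision of the object:

{
    "_id" : "9a372a83-1ec0-4036-9a02-6557e8eb4ed7",
    "_rev" : "1",
    "userName" : "jsmith",
    "givenName" : "John",
    "sn" : "Smith",
    "mail" : "jsmith@example.com",
    "accountStatus" : "active"
}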



This is just a small taster of what the REST API can do and we will explore it in much more detail in later blogs. You can also read all about the REST API here:

https://forgerock.org/openidm/doc/bootstrap/integrators-guide/index.html#appendix-rest