
Progress update on the IMF Integration Architecture - February 2022

Anne GUINARD


Described in the Pathway to the Information Management Framework, the Integration Architecture is one of the three key technical components of the Information Management Framework (IMF), along with the Reference Data Library and the Foundation Data Model. It consists of the technology and protocols that will enable the managed sharing of data across the National Digital Twin (NDT).

The IMF Integration Architecture (IA) team began designing and building the IA in April 2021. This blog gives an insight into its progress to date.


Principles

First, it is worth covering some of the key principles being used by the team to guide the design and build of the IA:

  • Open Source: It is vital that the software and technology that drive the IA are not held in proprietary systems that raise barriers to entry and prevent community engagement and growth. The IA will be open source, allowing everyone to utilise the capability and drive it forward.

     
  • Federated: The IA does not create a single monolithic twin. When Data Owners establish their NDT Node, the IA will allow them to publish details of the data they want to share to an NDT data catalogue; other users can then browse, select and subscribe to the data they need to build a twin that is relevant to their needs. This subscription is on a node-to-node basis, not via a central twin or data hub, and Owners can specify the access, use, or time constraints that they may wish to apply to that subscriber. Once subscribed, the IA takes care of authenticating users and updating and synchronising data between nodes.
     
  • Data-driven access control: To build trust in the IA, Data Owners must be completely comfortable that they retain full control over who can access the data they share with the NDT. The IA will use an ABAC (attribute-based access control) security model to allow owners to specify in fine-grained detail who can access their data, and permissions can be added or revoked simply and transparently. This is implemented as data labels which accompany the data, providing instructions to receiving systems on how to protect it. A simplified, illustrative sketch of such a label appears after this list.


     
  • IMF Ontology Driven: NDT information needs to be accessed seamlessly. The NDT needs a common language so that data can be shared consistently, and this language is being described in the IMF Ontology and Foundation Data Model being developed by another element of the IMF team. The IA team are working closely with them to create capabilities that will automate conversion of incoming data to the ontology and transact it across the architecture without requiring further “data wrangling” by users.
     
  • Simple Integration: To minimise the risk of implementation failure or poor engagement due to architectural incompatibility or a high cost of implementation, the IA needs to be simple to integrate into client environments. The IA will use well-understood architectural patterns and technologies (for example REST and GraphQL; an illustrative query in this style is sketched after this list) to minimise local disruption when data owners create an NDT node, and to ensure that, once implemented, the ongoing focus of owner activity is on where the value is – the data – rather than on maintenance of the systems that support it.
     
  • Cloud and On-Prem: An increasing number of organisations are moving operations to the cloud, but the IA team recognises that this may not be an option for everyone. Even when cloud strategies are adopted, the journey can be long and difficult, with hybrid options potentially being used in the medium to long term. The IA will support all of these operating modes, ensuring that membership of the NDT does not negatively impact existing or emerging environment strategies.
  • Open Standards: For similar reasons to those behind making the IA open source, the IA team is committed to ensuring that data in the NDT IA are never locked in or held in inaccessible proprietary formats.
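To make the data-labelling idea above a little more concrete, the sketch below shows, in a very simplified form, how an attribute-based label might travel with a shared record and be checked by a receiving node. The field names and the is_permitted helper are illustrative assumptions, not the IA's actual label schema.

```python
# Illustrative only: hypothetical label fields, not the IA's actual schema.

# A security label travels with the shared data and tells the receiving
# node which attributes a requesting user must hold to see it.
shared_record = {
    "payload": {"asset_id": "bridge-042", "condition": "fair"},
    "label": {
        "owner": "HighwaysBody",
        "allowed_organisations": ["HighwaysBody", "LocalAuthorityX"],
        "allowed_purposes": ["maintenance-planning"],
        "expires": "2022-12-31",
    },
}

def is_permitted(label: dict, user_attributes: dict) -> bool:
    """Very small ABAC check: every rule in the label must be satisfied
    by the requesting user's attributes."""
    return (
        user_attributes.get("organisation") in label["allowed_organisations"]
        and user_attributes.get("purpose") in label["allowed_purposes"]
    )

requester = {"organisation": "LocalAuthorityX", "purpose": "maintenance-planning"}
print(is_permitted(shared_record["label"], requester))  # True
```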
 
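Similarly, the simple-integration principle points to well-understood patterns such as REST and GraphQL. As a purely illustrative sketch (the endpoint, query shape and field names are assumptions rather than a published NDT node API), a consuming system might query another node over HTTP like this:

```python
# Illustrative only: the endpoint and schema are assumed for this sketch,
# not taken from an NDT node specification.
import json
import urllib.request

NODE_ENDPOINT = "https://node.example.org/graphql"  # hypothetical NDT node

query = """
query {
  datasets(topic: "flood-defences") {
    id
    title
    owner
  }
}
"""

request = urllib.request.Request(
    NODE_ENDPOINT,
    data=json.dumps({"query": query}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    print(json.load(response))
```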

What has the IA team been up to this year?

The IMF chose to adopt the existing open-source Telicent CORE platform to handle the ingest, transformation and publishing of data to the IMF ontology within NDT nodes. The focus has therefore been on beginning to build and prove some of the additional technical elements required to make the cross-node transactional and security elements of the IA function (a rough sketch of the kind of node-internal pipeline step this implies follows the list below). Key focus areas were:

  • Creation of a federation capability to allow Asset Owners to publish, share and consume data across nodes
     
  • Adding ABAC security to allow Asset Owners to specify fine-grained access to data
     
  • Building a ‘Model Railway’ to create an end-to-end test bed for the NDT Integration Architecture, and prove-out deployment in containers
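As a rough, illustrative sketch of the kind of node-internal step this implies (mapping incoming source data to the common ontology, attaching a security label, and publishing the result onto a streaming backbone such as Apache Kafka, which Ian mentions in the comments below), the snippet assumes hypothetical field names, a hypothetical label structure and a hypothetical topic name; it is not the Telicent CORE platform's actual interface.

```python
# Illustrative sketch only: field names, topic and label structure are
# assumptions, not the actual Telicent CORE / IA interfaces.
import json
from kafka import KafkaProducer  # kafka-python client

def map_to_ontology(source_row: dict) -> dict:
    """Hypothetical mapping from a source system's fields to terms from
    the common IMF ontology, so every node speaks the same language."""
    return {
        "imf:assetIdentifier": source_row["asset_ref"],
        "imf:observedState": source_row["status"],
    }

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

incoming = {"asset_ref": "pump-station-7", "status": "operational"}

message = {
    "data": map_to_ontology(incoming),
    "label": {  # ABAC label travels with the data (see the principle above)
        "allowed_organisations": ["WaterCompanyY"],
        "allowed_purposes": ["resilience-planning"],
    },
}

producer.send("ndt.shared-assets", value=message)  # hypothetical topic name
producer.flush()
```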



 

 



Comments

There was a very good presentation on this topic by Ian on today's (22nd Feb) DTH Gemini call. I had posted a question that was not taken up during the call, perhaps due to lack of time, so I am posting it here again.

There was a mention of both open source and federated nodes during the presentation (and also in the above article). So I was wondering whether any thought has been given to using open-source federated and distributed backends such as the Linux Foundation's Iroha project. It is a distributed "ledger" backed by open-source databases like Postgres.

 


Hi Ajeeth

We're trying to keep an open mind when it comes to distributed ledger and blockchain. It is quite hard to find business problems that the technology solves that aren't already solved in better ways by other (well established) technologies. There is also the environmental cost of cryptographic proof which must always be weighed against any perceived advantage of blockchain - though it looks like novel proof approaches are emerging that are less disastrous in terms of energy consumption. 

We had DL and DApps in mind for experimentation this year, but the tech is just not there yet in terms of scale and throughput for what we want to do. We'll keep re-visiting it though, as this is interesting stuff. We had a lot of Ethereum folks pushing solutions at one point, but that really did feel like a hammer looking for nails. From what I've read about Iroha, there may be some promise there - we'll keep an eye on it. The acid test has to be whether it can do the job better than traditional tech though.

We're using Apache Kafka right now, but we're not wedded to it. Iroha may be an option for future builds. 

Ian



