
Articles & Publications: Shared by the Community

    Dip into this series of industry blogs sent to us by Bentley Systems. 
    Vertical Buildings: What Asset Owners and Contractors Need to Know about Britain’s New Building Safety Regulator
    Analysis of the U.K.’s Top 100 Construction Companies Shows Which Firms Performed Best Over the Past Decade
    Grand Paris Express: What We Learn from City Centre Transport Megaprojects in Paris and London
    A Smarter Way to Future-proof Our Water Supply
    The Wind of Change Is Blowing on Renewables, Making Them Cheaper and More Efficient, with the U.K. Ideally Placed to Benefit
    A Bird’s Eye View: How the World’s First Digital Twin of a Nation Can Help Create Better Cities
    The Nine Euro Ticket
    Leadership in a Data-driven Age: Why the Best Managers Will Always Welcome Greater Transparency and Why Fundamental Leadership Components Haven’t Changed
    For Electric Vehicle Charging, “Going Dutch” Means Being Open, Transparent, and Interoperable
    Since the Census Helps Plan Infrastructure and Housing, Could a National Framework for Data Help Overcome the Shortcomings of the COVID-19 Census?
    Regardless of Progress at COP27, We Are Getting on with Transforming and Decarbonising Infrastructure Delivery
     
    Do you have any material that would be of interest to our members? Please get in touch - contact me via DT Hub Messages.
     
    Contributors – Leigh Taylor, Garie Warne 
The digital twin landscape has been revolutionised by the integration of control and automation technologies. These technologies play a crucial role in optimising the operation and maintenance of industrial and infrastructure systems, enabling organisations to make informed decisions and improve overall performance. In this article, we will discuss the Anglian Water OT (Operational Technology) strategy, why it is critical to a successful digital transformation, and how it fits into the enterprise picture. We will also discuss how a Near Real Time Model (NRTM) solution is being used within the delivery of Anglian Water's Strategic Pipeline as a "system of systems" to aid operations and maintenance. 
    AW OT Strategy  
     
The Anglian Water OT strategy is a highly important aspect of the digital twin landscape because it describes how these control systems should be implemented and used. It focuses on using control and automation technologies to optimise the performance of operational systems. The strategy is implemented through technologies such as Industrial Internet of Things (IIoT) enabled SCADA systems, linked into an Industry 4.0 approach with a central data core. This approach follows the principles of Edge Driven (so that the most up-to-date information can be used), Report by Exception (to minimise data transfer, as sketched below) and Open Architecture (to avoid vendor lock-in). A further principle, Connect, Collect and Store, means that every data point is enabled within a connection (to ease future enhancements), only what needs to be examined is actually collected, and only what is needed for latest-value updates, trend analysis or a historical perspective is stored on a long-term basis. 
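To make Report by Exception concrete, here is a minimal sketch of the idea, assuming a simple deadband rule; the class name and threshold are invented for illustration and are not SPA's actual implementation.

    # Minimal sketch of Report by Exception at the edge: a reading is only
    # forwarded when it differs from the last transmitted value by more than
    # a configured deadband, minimising data transfer to the central core.
    # Names and thresholds are illustrative, not Anglian Water's real values.

    class ExceptionReporter:
        def __init__(self, deadband: float):
            self.deadband = deadband
            self.last_sent = None

        def process(self, reading: float):
            """Return the reading if it should be transmitted, else None."""
            if self.last_sent is None or abs(reading - self.last_sent) > self.deadband:
                self.last_sent = reading
                return reading
            return None

    reporter = ExceptionReporter(deadband=0.5)  # e.g. flow, illustrative units
    for sample in [10.0, 10.2, 10.1, 11.0, 11.2, 9.9]:
        sent = reporter.process(sample)
        if sent is not None:
            print(f"transmit {sent}")  # only 10.0, 11.0 and 9.9 are sent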
     
     
     
    Industry 4.0 
The Industry 4.0 approach taken by Anglian Water's Strategic Pipeline Alliance (SPA) differs from previous approaches in that data is placed at the core of operations, and silos and point-to-point integrations are removed. This allows data to be captured at the site (or edge) level and used seamlessly throughout the enterprise. 
SCADA systems are used to remotely monitor and control industrial and infrastructure systems such as pipelines and water treatment facilities. In the digital twin landscape, SCADA systems monitor the performance of infrastructure systems so that adjustments can be made to optimise that performance. For example, Anglian Water is using SCADA systems to monitor the flow of water through the Strategic Pipeline, as well as the functioning of pumps, valves and other critical components. The SCADA and related control system comprises several components, including sensors and actuators placed along the pipeline to collect data, and a central control system that processes and displays this data in real time. All of these must be fully compliant with the Network and Information Systems (NIS) Regulations, which govern systems deemed to be critical national infrastructure. 
SPA recognised that a key component in managing large infrastructure systems is the use of a "data core": a centralised repository for storing, processing and analysing data from the pipeline and other systems. This data can include sensor data, control system data and other operational data, as well as more IT-centric data such as asset information, location data, hydraulic models, and BIM (Building Information Modelling) data.  
By storing this data in a centralised location, Anglian Water can easily access and analyse it to identify any issues that need to be addressed. Our Data Core solution involves the use of a centralised data storage and processing system, which is integrated with the SCADA system and other technology systems to provide a holistic view of the pipeline and its surrounding infrastructure. This is a key difference between our approach and many other proof-of-concept activities within the market, as it is inherently scalable and can more easily be productionised. 
    The implementation of the data core solution also provides opportunities for the development of a "Near Real Time Model" (NRTM) solution for SPA. An NRTM solution will allow Anglian Water to see how the pipeline is behaving in real-time and adjust as needed. By having this level of control, Anglian Water can ensure that the pipeline is operating at peak efficiency and minimize the risk of downtime or other issues. 
    SCADA Control 
     

Control and automation technologies will be used within SPA to remotely monitor and control various aspects of the system. Because the pipeline is a critical asset supplying water to hundreds of thousands of customers, SPA takes a three-layer approach to control. The core layer (Pipeline and Sites) is based on autonomous control of each individual site, with a further "last line of defence" of manual, site-based control. These layers are fully isolated from the outside world; however, as the SPA pipeline is highly complex, neither is a sustainable position to be in for long. 
To automate all of the 70+ sites linked to the SPA pipeline so that they operate effectively as a single system, a SCADA (Supervisory Control and Data Acquisition) system acts as a Regional Control layer over all the sites. It ensures that the right amount of wholesome water reaches the right customers at the right time, primarily using a mass-balance approach that moves water in a way that maximises supply against an agreed prioritisation, as sketched below. 
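As a rough illustration of the mass-balance idea, here is a sketch that allocates available water to demand points in priority order. The site names, figures and greedy logic are invented for the example; the real Regional Control system balances far more factors.

    # Illustrative mass-balance allocation: move available water to demand
    # points in priority order (1 = highest) until supply is exhausted.
    # Figures and site names are invented for the example.

    def allocate(available_mld: float, demands: list[dict]) -> dict:
        allocation = {}
        for d in sorted(demands, key=lambda d: d["priority"]):
            take = min(d["demand_mld"], available_mld)
            allocation[d["site"]] = take
            available_mld -= take
        return allocation

    demands = [
        {"site": "North WTW", "demand_mld": 40, "priority": 1},
        {"site": "Coastal zone", "demand_mld": 25, "priority": 2},
        {"site": "Storage top-up", "demand_mld": 30, "priority": 3},
    ]
    print(allocate(80, demands))
    # {'North WTW': 40, 'Coastal zone': 25, 'Storage top-up': 15}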
Whilst this control system can ensure that the right amount of water is moved, it cannot optimise for cost, for impacts related to the use of sustainable energy, or for other factors such as the amount of water we abstract from the ground over a yearly period. That is the job of the Near Real Time Model, which will optimise these factors as far as possible, making the SPA pipeline as efficient as possible. 
    NRTM Control 
The NRTM solution is a "system of systems": it integrates with multiple technology systems to provide a holistic view of the SPA pipeline and its surroundings. This integration allows Anglian Water to make informed decisions based on data collected from multiple sources. The NRTM solution also allows for predictive maintenance, identifying potential issues before they occur; this will help Anglian Water prevent downtime and minimise the need for costly repairs, and a toy sketch of the idea follows. In addition, the NRTM solution can provide insights into the energy efficiency of the SPA pipeline, helping to reduce energy costs and improve the overall performance of the system.  
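Here is that toy sketch: a simple statistical check that flags an asset when a monitored signal drifts well away from its recent baseline. The signal, values and three-sigma rule are illustrative assumptions, not the NRTM's actual models.

    # Sketch of a predictive-maintenance check: flag an asset when the latest
    # reading sits more than three standard deviations from its recent mean.
    # Purely illustrative; production models are far more sophisticated.
    from statistics import mean, stdev

    def needs_inspection(history: list[float], latest: float, k: float = 3.0) -> bool:
        mu, sigma = mean(history), stdev(history)
        return abs(latest - mu) > k * sigma

    vibration = [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 2.0]  # mm/s, invented data
    print(needs_inspection(vibration, 3.4))  # True: the drift suggests wear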
    Summary 
    In conclusion, the digital twin landscape is revolutionizing the way Anglian Water monitors and controls their infrastructure-based systems. The integration of control and automation technologies, the implementation of an OT strategy, and the use of a NRTM solution, are all critical components in the optimization of operations and maintenance. By using these technologies, Anglian Water can make informed decisions, improve the overall performance of their Strategic Pipeline system, and reduce downtime and costs. The digital twin landscape therefore provides a comprehensive and integrated view of complex water systems, allowing Anglian Water to manage their infrastructure more efficiently and effectively. 
    Whilst writing another article about loosely coupled systems and their data, I was struck by one internal reviewer’s comments about one uncontentious (at least to me) statement I made:
    The loosely coupled systems article was already getting a bit long, so I couldn’t go into any depth there. The loose end of the idea was still dangling, waiting for me to pull on it. So I pulled and… well, here we are in another article.
I’ve always liked origins. I love it when someone crystallises an idea in a “Eureka!” moment, especially if, at the time they have the idea, they have no clue about the wide-ranging impact it will have. Think of Darwin’s “On the Origin of Species” (1859) or Mary Wollstonecraft’s “A Vindication of the Rights of Woman” (1792). A particular favourite of mine is the “Planck Postulate” (1900), which led to quantum mechanics.
The origin story that’s relevant to all this is Tim Berners-Lee’s (TBL) “Vague but exciting…” diagram (1989), which was the origin of the world-wide web. I want you to take another look at it. (Or, if it’s your first look, be prepared to be awed by how much influence that sketch has had on humanity.) There are a few things in that diagram that I want to highlight as important to this article:
Information management
Distributed systems
Directed graphs
…but don’t worry, we’re not going to get into the weeds just yet. 
I want to introduce a use case and our main character. He’s not real, he’s an archetype. Let’s call him “Geoff”. Geoff is an example of a person in a vulnerable situation, someone who could benefit from the Priority Service Register (PSR). Geoff lives alone and has the classic three health problems that affect our increasingly ageing population: chronic obstructive pulmonary disease (COPD), type-2 diabetes and dementia.
Geoff’s known to a lot of agencies. The Health Service have records about him, as do the Police, Utilities (Gas, Water, Electricity), the Post Office, Local Government and, as he has dementia, his local Supermarket. They have a collective responsibility to ensure Geoff has a healthy and fulfilling life. Executing on that responsibility is going to mean sharing data about him.
Now we’re set in the arc of our story. We’ve got the four characteristics of this kind of problem:
Heterogeneous data
Diverse ownership
Selfish agendas
Overarching cooperative goal
To expand: data about Geoff sits in various databases, in different forms, owned by assorted agencies. Each of those agencies has its own classifications for vulnerabilities, its own goals, aims and agenda - targets and KPIs to which it has to adhere - but remember they also have a joint responsibility to make sure Geoff is OK.
    In this article, I’d like to weave the three aspects of TBL’s diagram with the four characteristics of the problem space and see if we can “Solve for Geoff” and improve his experience.
    Let’s start with information management in distributed systems. The understanding of  “Distributed Systems” has moved on since 1989. What we’re talking about here is a “Decentralised” system. There’s not one place (in the centre) where we can put data about Geoff. Everyone has some information about him and we need to manage and share that information for the good of Geoff.
If we imagine a couple of separate relational databases that have rows of data about Geoff, we’ll see there are two problems.
Two different versions of Geoff
Spotted them? They are:
The names of the columns are different
The identifying “key” data isn’t the same (why would they be? They’re in different systems)
To generalise: 1 is about metadata - the data about the data; 2 is about identity.
So, to metadata. In a relational database there is some metadata, but not much, and it’s pretty hidden. You’ve probably heard of SQL, but perhaps not of SQL’s cousin, DDL (Data Definition Language). DDL is what defines the tables and their structure, so the first example above would be something like:
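The original figure isn’t reproduced here, but a minimal sketch of that kind of table definition, with column names guessed from the discussion below, can be run against SQLite:

    # A guess at the shape of the "Persons" DDL the figure showed (column
    # names are illustrative), executed via sqlite3 to show how little the
    # DDL actually tells a machine about meaning.
    import sqlite3

    ddl = """
    CREATE TABLE Persons (
        PersonID   INTEGER PRIMARY KEY,  -- "1234" in one system, "4321" in another
        Name       VARCHAR(255),
        DOB        DATE,
        Vulnerable BOOLEAN               -- vulnerable to what? no way to say
    );
    """

    conn = sqlite3.connect(":memory:")
    conn.execute(ddl)
    conn.execute("INSERT INTO Persons VALUES (1234, 'Geoff', '1948-03-02', 1)")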
Data Definition Language for the Persons table
What’s wrong with this? (I hear you ask.) At least a couple of things. One is that there’s no description of what the terms mean. What does “Vulnerable” mean? And by defining it as a boolean, you’re either vulnerable or not. The other thing that’s very important in Geoff’s scenario is that this (incomplete and unhelpful) metadata is never exposed. You might get a CSV file with the column headings, and a Word document explaining what they mean to humans, but that’s about it. Good luck trying to get a computer to understand that autonomously...
A part of Tim Berners-Lee's "Vague but exciting" diagram
I haven’t forgotten TBL’s diagram. In it, he hints at another way of describing data: using a directed graph. A graph has nodes (the trapeziums in his diagram) and edges (the lines with words on them). The “directed” bit is that they’re arrows, not just lines. He’s saying that there are a couple of entities, This Document and Tim Berners-Lee, and that the entity called Tim Berners-Lee wrote the entity called This Document. (And, as it’s directed, the Document didn’t, and couldn’t, write Tim Berners-Lee.)
Skipping forward blithely over many developments in computer science, we arrive at the Resource Description Framework (RDF), which is a mechanism for expressing these graphs in a machine-readable way. RDF is made of individual “Triples”, each of which asserts a single truth. The name refers to the three parts: subject, predicate and object (SPO). To paraphrase, the above bit of graph would be written:
    Subject                       Predicate      Object
    Tim Berners-Lee      Wrote            This document
We can translate Geoff’s data into RDF, too. The following is expressed in the “Terse RDF Triple Language” (TTL or “Turtle”), which is a nice compromise between human and machine readability.
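The Turtle in the original figure isn’t reproduced here, but a sketch of its likely shape, with prefix URIs and literal values that are illustrative only, parses cleanly with the rdflib library:

    # A sketch of Geoff's data as Turtle, parsed with rdflib (pip install rdflib).
    # Property and class choices mirror those named in the article; the prefix
    # URIs and the exact triples in the original figure may have differed.
    from rdflib import Graph

    ttl = """
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix fibo: <https://spec.edmcouncil.org/fibo/ontology/FND/AgentsAndPeople/People/> .
    @prefix ex:   <http://example.org/> .

    ex:some_abstract_id a foaf:Person ;
        foaf:name "Geoff" ;
        fibo:hasDateOfBirth "1948-03-02"^^<http://www.w3.org/2001/XMLSchema#date> .
    """

    g = Graph()
    g.parse(data=ttl, format="turtle")
    print(len(g))  # 3 triples, each asserting a single truth about Geoff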
RDF version of Geoff's data
The things before the colons in the triples are called “prefixes” and, together with the bit after, they’re a shorthand way to refer to the definition of a property (like fibo:hasDateOfBirth) or class (like foaf:Person). Notice that I’ve hyperlinked the definitions. This is because all terms used in RDF should be uniquely referenceable (by a URL) somewhere in an ontology. Go on, click the links to see what I mean.
We’ve now bumped into one of the ideas that spun out of TBL’s first diagram: the Semantic Web. He went on (with two others) to describe it further in their seminal paper of 2001. All the signs were there in that first diagram, as we’ve just seen. Since then, the Semantic Web has been codified in a number of standards, like RDF, with a query language, SPARQL, and myriad ontologies spanning multiple disciplines and domains of knowledge. The Semantic Web is often connected to the concept of “Linked Data”, in that you don’t have to possess the data: you can put a link to it in your data and let the WWW sort it out. foaf:Person from above is a small example of Linked Data - they’ve defined “Person” in a linkable way, so I can use their definition by adding a link to it. We’ll get back to this in a bit.
    There are so many great reasons for encoding data in RDF. Interoperability being the greatest, in my opinion. I can send that chunk of RDF above to anybody and they (and their computers) should be able to understand it unambiguously as it’s completely self-contained and self-describing (via the links). 
There’s just not an equivalent in relational or other databases.
That deals with the first of our two problems outlined before, i.e. metadata. Let’s move on to identity. In my (and a lot of people’s) opinion, identity wasn’t really considered carefully enough at the beginning of the internet. I don’t blame them. It would have been hard to predict phishing, fake accounts and identity theft back in 1989.
I put <some_abstract_id> in the RDF example above on purpose. Mainly because RDF needs a “subject” for all the triples to refer to, but also because I wanted to discuss how hard it is to think of what that id/subject should be. In RDF terms it should be an IRI, as it should point to a uniquely identifiable thing on the Internet, but what should we use? I have quite a lot of identities on the internet. On LinkedIn, I’m https://www.linkedin.com/in/mnjwharton/ . On Twitter, I’m https://twitter.com/iotics_mark . In Geoff’s case, what identity should we use? He has two in my contrived example: “1234” and “4321” - neither of which has any meaning outside the context of its respective database. I certainly can’t use them as my <some_abstract_id> as they’re not URLs or URIs.
To solve this problem, who we gonna call? Not Ghostbusters, but the W3C and their Decentralised Identifiers (DIDs). Caveat first: this isn’t the only way to solve identity problems, just my favourite. The first thing to know about DIDs is that they are self-sovereign. This is important in a decentralised environment like the internet. There is (rightly) no place I can go to set up my “internet id”. I can set up my own id, host it anywhere on the internet and, when it’s resolved (looked up in a database, for example), it will show you a short document. Here’s an example from the W3C spec - first the DID itself:
    did:example:123456789abcdefghi
And then the document to which it points:
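The spec’s document isn’t reproduced here; below is a minimal sketch of its shape, modelled on the W3C DID Core examples, with a placeholder key value:

    # Sketch of a minimal DID document, shaped like the examples in the W3C
    # DID Core spec (values illustrative). The public key lets anyone verify
    # proofs that only the holder of the private key could have produced.
    import json

    did_document = {
        "@context": ["https://www.w3.org/ns/did/v1"],
        "id": "did:example:123456789abcdefghi",
        "authentication": [{
            "id": "did:example:123456789abcdefghi#keys-1",
            "type": "Ed25519VerificationKey2020",
            "controller": "did:example:123456789abcdefghi",
            "publicKeyMultibase": "z6Mk...example-key..."  # placeholder, not a real key
        }]
    }
    print(json.dumps(did_document, indent=2))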
Example DID document
I agree that it looks pretty complicated, but it isn’t really for regular humans. The important thing is that I can prove, cryptographically, that this id is mine, as it has my public key, and I can add proofs to it that only I can make (because only I have my private key). (Note for tech nerds: the document is in JSON-LD, the JSON serialisation of RDF.) These documents are stored in a Registry (which should itself be decentralised, such as a blockchain or a decentralised file system such as IPFS).
Let’s get back to Geoff. The <some_abstract_id> I put in earlier can now be replaced by Geoff’s. I’ll make one up for him:
    did:madeup:9e0ff
Then we can use an excellently-named technique called identity “smooshing”, i.e. we can link all the other identities of Geoff that we know about using some triples. There are various properties we could pick:
    foaf:nick - someone’s nickname
    skos:altLabel - an alternative label for something
But I think that gist:isIdentifiedBy from Semantic Arts’ Gist ontology is the best choice for Geoff. gist:isIdentifiedBy describes itself as:
    Perfect! Especially the bit about being able to have more than one.
Putting all the bits together, using decentralised identifiers, semantics and linked data, we can have the self-sovereign id for Geoff linked to all his data and to the other identifiers in other systems - all in one place, and all self-contained and self-describing:
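The original figure isn’t reproduced, but a sketch of the smooshed result might look like the following; the prefix URIs and the hs:/ut: namespaces for the other systems’ identifiers are invented for illustration:

    # Sketch of the "smooshed" graph: Geoff's self-sovereign DID as the
    # subject, linked to the identifiers other systems know him by. Prefix
    # URIs and exact triples are illustrative, not the original figure.
    from rdflib import Graph

    ttl = """
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix gist: <https://w3id.org/semanticarts/ns/ontology/gist/> .
    @prefix hs:   <http://health.example.org/id/> .
    @prefix ut:   <http://utility.example.org/id/> .

    <did:madeup:9e0ff> a foaf:Person ;
        foaf:name "Geoff" ;
        gist:isIdentifiedBy hs:1234 , ut:4321 .
    """

    g = Graph()
    g.parse(data=ttl, format="turtle")
    for s, p, o in g:
        print(s, p, o)  # every agency's id for Geoff, hung off one subject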
Full RDF version of Geoff with links to other systems' information
Tying all the threads together to conclude: TBL’s vision of the World Wide Web and the Semantic Web were, and remain, decentralised to their core. The clue’s in the name: “Web”. His original diagram had all the pieces (except for id) - information management in distributed (now decentralised) systems using graphs. TBL even tried to rename the WWW the Giant Global Graph (GGG) to emphasise this. Now most people just bundle these technologies as Web 3.0.
    We also managed to “solve for Geoff” - the diverse, customer-in-a-vulnerable-situation use case - by allowing all the parties to keep data about Geoff:
In their own systems
Using their own identifiers
In an interoperable way (i.e. in RDF triples) so they can share some or all of it
I think of this not as a standard as such, but a standard approach. It’s like we all agree on the alphabet to use, but we don’t care so much about what you write using it.
Decentralised problems call for decentralised solutions, and the mix of Semantics and Decentralisation allows everyone to keep control of their data about Geoff and to manage their part of the service mix, while also sharing it with others in an interoperable way. At IOTICS, we call it “Digital Cooperation”.
    I don’t really care what you call it, Semantics and Decentralisation go together like Fish and Chips, Beans on Toast, Strawberries and Cream. Strawberries are nice; cream is nice. But, together, they are more than the sum of their parts.
     
     
    Here are a few photos from the live launch of the Apollo Protocol white paper on 25 October.
    The launch also included news of the InnovateUK funded programme of Hacks we are running over the next few months.  Read on for links and more info:

    The team who wrote the white paper: @Su Butcher, @Henry Fenby-Taylor, Adam Young (techUK), @Paul Surin, @Rab Scott, @Neil Thompson, @Jonathan Eyre, @Rick Hartwig. 

    Fergus Harradence (BEIS) endorsing the Apollo Protocol in front of a slide showing the stakeholders.

    John Patsavellas cautioning us about data for the sake of it.

    Richard Robinson describing the Construction Leadership Council's vision for the future and endorsing the Apollo Protocol.

Austin Cook of BAE Systems gave a fascinating insight into the limitations even such an advanced manufacturer is grappling with, and described their Factory of the Future project.

    A ripple of amusement runs through the audience when Miranda Sharp mentions that "everyone thinks we should share data but not their particular data at this particular time".

    @David Wagg has just been appointed one of the leads on the infrastructure arm of the pan-UK Turing Research and Innovation Cluster.

    @Jonathan Eyre and @Neil Thompson announce the InnovateUK funded Hack sessions. Read more about the first ones and sign up here: https://engx.theiet.org/b/blogs/posts/digital-twins-apollo-protocol-value-hack

    Maria Shiao asks a question from the floor.

    Chair @Rab Scott kept us entertained!

    Post event drinks and networking
    Want to watch the live launch? You can do so here: 
    Want to come to the Hacks? Find out more here: https://engx.theiet.org/b/blogs/posts/digital-twins-apollo-protocol-value-hack
    Want to keep in touch? Join the Apollo Protocol Network here: 
     
     








    100%Open is working with a Net Zero Buildings (domestic & non-domestic sites) accelerator to research the key brands and the propositions working in this area.
If you work in a closely related industry with responsibility for decarbonisation, renewables, energy efficiency and/or net zero buildings’ design, management, products and services, we would love to have you take part in this research. We can offer £50 in acknowledgement of the time investment: a 45-minute interview during November 2022. 
This offer is time-limited, so if you or anyone you know is interested, email us at hello@100open.com today!
The Department of Computer Science in Innsbruck (Austria) is currently working together with Aston University in the UK. Together they have developed a really interesting survey in the field of DIGITAL TWINS. The underlying question is to what extent companies recognise, or have already recognised, the potential of digital twins for themselves and are therefore already working with them. This may be the case in product development, but it can also involve other aspects, e.g. a smart shop floor, virtual factory, etc.
Be part of it and help shape future research agendas in digital twin engineering.
It won't take long and it is completely anonymous.
    https://umfrage.uibk.ac.at/limesurvey/allgemein/index.php/273288?lang=en
     
     
Contribution of SPA to Anglian Water's digital twin roadmap 
    Anglian Water has a 15-year roadmap for achieving an enterprise-wide Digital Twin that can ultimately form part of the long-term vision for a UK National Digital Twin.  
     
    The very first steps of the roadmap, which defined how Anglian Water's digital twin can meet the regulated outcomes as prescribed by Ofwat, are complete.  Those activities were followed by a proof of concept that showed how digital twin approaches could improve energy management and workforce efficiency within the example context of a pumping station. 
SPA is now at the next stage of the roadmap, where we have rolled out a first version of our applications and data architecture.  Our focus, and biggest challenge, is building an extendable platform that can support Anglian Water's future Digital Twin ambitions whilst driving value for SPA in the short term. 
    Implementing our applications and data strategy is a substantial transformational change, which requires a cultural shift towards a product approach, and a concerted effort to align the architecture across people, culture, technology, and data.
    Product approach  
    Anglian Water has been investigating the merits of product lifecycle management approaches to introduce a 'product mindset' to the business, ensuring that investments promote repeatability and robust standards.  
In this context, SPA has been working with Anglian Water to develop an asset information model, process blocks and product data templates. Those elements provide the foundation for a product-based approach to digital assets that can be reused across the AW enterprise; a sketch of the idea follows.  
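As a hedged illustration of what a product data template buys you, here is a minimal sketch; the field names and validation rule are invented for the example and are not the actual SPA templates.

    # Illustrative product data template: a reusable definition of the data
    # a digital asset must carry so it can be reused across the enterprise.
    # Field names are invented, not the actual SPA/Anglian Water templates.

    PUMP_TEMPLATE = {
        "required": ["asset_id", "functional_location", "rated_flow_m3h"],
    }

    def validate(record: dict, template: dict) -> list[str]:
        """Return the required fields missing from an asset record."""
        return [f for f in template["required"] if f not in record]

    record = {"asset_id": "AW-000123", "rated_flow_m3h": 450}
    print(validate(record, PUMP_TEMPLATE))  # ['functional_location']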
     
SPA has also developed various graphical interfaces through BIM, GIS and control and automation platforms, which have provided a mechanism to build products that allow user interaction with the digital assets. These graphical interfaces provide stakeholders with powerful ways of asking "what if" questions across different organisational objectives and asset functions. 
     
    Application architecture
SPA acts as a 'critical friend' to the Anglian Water enterprise. All SPA applications are continuously assessed for extensibility and compatibility within the Anglian Water applications landscape.  
To that end, SPA strives to use technologies recently approved or historically used by Anglian Water. That approach brings significant benefits in reducing procurement costs and ensuring continuity.  
On the other hand, consciously coupling SPA's delivery to Anglian Water's longer-term roadmap does mean a greater level of complexity initially. However, that approach is already showing benefits in the velocity of our trajectory, with the ability to reuse existing patterns and to gain consensus and goodwill amongst the wider change community.
    Data architecture
From a data perspective, Anglian Water, with its strong Digital Twin ambitions, is in the process of maturing its curation and management of data. That involves activities in several areas, such as data accuracy, integrity, completeness and timeliness. We are working with our Anglian Water colleagues to enable these through: 
Aligning our data templates with key Anglian Water contextual data, such as asset identification, asset location and functional location codes, to ensure data continuity across the various systems by using the same identification system that Anglian Water has developed.
Highlighting where current technologies cannot securely and safely house the core data required for the Digital Twin.
Ensuring that data can flow from systems of record, and sensors, into the Digital Twin in a manner that enables timely decisions to be made.
Ensuring that data captured within the SPA design and build phases can be held within the Anglian Water IT systems post-handover.
People and culture
There persists a view that technologies such as digital twins will drive new value on their own. We believe, however, that value will be realised through business transformation enabled by digital twins. This will require a cultural shift in a very traditional industry.  
    The Strategic Pipeline Alliance was built on the principles of enabling data-informed decision-making. Valuing "data as an asset" is a new concept for Anglian Water.  
Although there are robust governance processes within the organisation, an approach of open, early communication has been taken to provide a "no surprises" philosophy. That approach ensures appropriate stakeholders are engaged as soon as is practical after identification and brought into the philosophy of our journey. 
Education and storytelling are fundamental to guiding the Anglian Water organisation along this transformational journey. We are therefore working closely with Anglian Water communities of practice to understand the required business capabilities and the organisation's current maturity in these areas. For example, the Anglian Water and SPA architecture teams share a leading-edge enterprise architecture model to ensure consistency and a frictionless handover. 
A 'core delivery team' has been formed to work with SPA and Anglian Water stakeholders. The team ensures alignment from both a technical and a cultural perspective, supporting the development of digital assets. Subject matter experts support the core team and are brought in as required to help deliver specialist services, such as penetration testing, installation of sensors and Operational Technology, or creation of data-driven hydraulic models. 
       
What is certainly clear is that we still have a lot to learn; however, by following architectural best practice and placing people at the heart of everything we do, we have put in place a good foundation from which to build. 
    If you would like to know more, please get in touch through the DT Hub.
     


    Join us to celebrate the launch of the Infrastructure Client Group Annual Digital Benchmarking Report 2021 on 15 June 2022 at 9:00 BST by REGISTERING HERE.
    The ICG Report, powered by the Smart Infrastructure Index, surveys asset owners and operators who jointly represent over £385bn worth of capital assets and over 40% of the national infrastructure and construction pipeline.
    After Mark Enzer, former Head of the National Digital Twin Programme, Centre for Digital Built Britain, introduces the report, Andy Moulds and Anna Bowskill, Mott MacDonald, will uncover the results of the latest research into the state of the nation for digital adoption and maturity.
This will be followed by a panel of industry thought leaders and practitioners, chaired by Melissa Zanocco, Co-Chair DTHub Community Council, sharing their views and best practice case studies from the ICG Digital Transformation Task Group and Project 13 Adopters, including:
Karen Alford, Environment Agency – skills
Matt Edwards, Anglian Water – digital twins
Sarah Hayes, CReDo – Climate Resilience Demonstrator digital twin
Neil Picthall, Sellafield – common data environments
Matt Webb, UK Power Networks – digital operating models
Will Varah, Infrastructure & Projects Authority – Transforming Infrastructure Performance: Roadmap to 2030
REGISTER to find out how much progress has been made at a time when digital transformation is a critical enabler for solving the global, systemic challenges facing the planet.
    For any questions or issues, please contact Melissa Zanocco: melissa.zanocco@ice.org.uk 
    Please note: We plan to make a recording of the event available. Please note that third parties, including other delegates may also take pictures or record videos and audio and process the same in a variety of ways, including by posting content across the web and social media platforms.
The bigger and more complicated the engineering problem, the more likely it is to have a digital twin. Firms that build rockets, planes and ships, for example, have been creating digital twins since the early 2000s, seeing significant operational efficiencies and cost savings as a result. To date, however, few firms have been able to realise the full potential of this technology by using it to develop new value-added services for their customers. We have developed a framework designed to help scale the value of digital twins beyond operational efficiency towards new revenue streams.
In spite of the hype surrounding digital twins, there is little guidance to help executives make sense of the business opportunities the technology presents beyond cost savings and operational efficiencies. Many businesses are keen to get a greater return on their digital twin investment by capitalising on the innovation and revenue-generating opportunities that may arise from a deeper understanding of how customers use their products. However, because very few firms are making significant progress in this regard, there is no blueprint to follow. New business models are evolving, but the business opportunities for suppliers, technology partners and end-users are yet to be fully documented.
Most businesses will be familiar with the business model canvas as a tool to identify current and future business model opportunities. Our 'Four Values' (4Vs) framework for digital twins is a more concise version of that tool, developed to help executives better understand potential new business models. It was designed from a literature review and validated and refined through industry interviews. The 4Vs framework covers: the value proposition, for the product or service being offered; the value architecture, the infrastructure that the firm creates and maintains in order to generate sustainable revenues; the value network, the firm's infrastructure and network of partners needed to create value and to maintain good customer relationships; and value finance, such as cost and revenue structures.
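The framework is conceptual rather than computational, but its structure is easy to capture as a record type; the field summaries in this sketch are my condensation of the sections that follow, not part of the framework itself.

    # A minimal structural sketch of the 4Vs framework as a record type.
    # Field comments condense the article's description; all illustrative.
    from dataclasses import dataclass

    @dataclass
    class FourVs:
        value_proposition: list[str]   # services offered and the customer value created
        value_architecture: list[str]  # control, delivery, interactions, data, boundary resources
        value_network: list[str]       # partners and stakeholders needed to create value
        value_finance: list[str]       # costing, pricing and revenue structures

    # One of the four business model types described below, in the framework:
    broker = FourVs(
        value_proposition=["data marketplace offering anonymised performance data"],
        value_architecture=["platform orchestrating the different ecosystem players"],
        value_network=["asset OEMs", "data providers", "platform operator"],
        value_finance=["recurring monthly revenues; the platform takes a fee"],
    )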
    Value proposition
    The value proposition describes how an organisation creates value for itself, its customers and other stakeholders such as supply chain partners. It defines the products and services offered, customer value (both for customers and other businesses) as well as the ownership structure. Examples of digital twin-based services include condition monitoring, visualization, analytics, data selling, training, data aggregation and lifespan extension. Examples of customer value in this context might include: decision support, personalisation, process optimisation and transparency, customer/operator experience and training.
    Value architecture
The value architecture describes how the business model is structured. It has five elements:
1. Value control is the approach an organisation takes to controlling value in the ecosystem. For example, does it exist solely within its own ecosystem of digital twin services or does it intersect with other ecosystems?
2. Value delivery describes how the digital twins are delivered: centralised, decentralised or hybrid? It also seeks to understand any barriers that may prevent the delivery of digital twins to customers.
3. Interactions refers to the method of customer interaction with the digital twin. Common examples include desktop or mobile apps, virtual reality and augmented reality.
4. Data collection underlies the digital twin value proposition and can be a combination of sensor-based and supplied/purchased data.
5. Boundary resources are the resources made available to enhance network effects and the scale of digital twin services. These typically comprise APIs, hackathons, software development toolkits and forums.
    Value network
The value network is the understanding of interorganisational connections and collaborations between a network of parties, organisations and stakeholders. In the context of digital twin services this is a given, as the delivery mechanism relies on multiple organisations, technological infrastructure and stakeholders.
    Value finance
This defines how organisations approach costing, pricing methods and revenue structure for digital twins. Digital twin revenue models most commonly centre on outcome-based revenue streams and data-driven revenue models. Digital twin pricing models include, for example, freemium and premium, subscription, value-based and outcome-based pricing. From extensive interviews with middle and top management about the services offered by digital twins, we identified four types of digital twin business models and applied our 4Vs approach to understand how each is configured and how it generates value.
    Brokers
These were all found in information, data and system services industries. Their value proposition is to provide a data marketplace that orchestrates the different players in the ecosystem and provides anonymised performance data from, for example, vehicle engines or heating systems for buildings. Value Finance consists of recurring monthly revenues levied through a platform, which itself takes a fee and allocates the rest according to the partnership arrangements.
    Maintenance-optimisers
    This business model is prevalent in the world of complex assets, such as chemical processing plants and buildings. Its value proposition lies in providing additional insights to the customer on the maintenance of their assets to provide just-in-time services. What-if analysis and scenario planning are used to augment the services provided with the physical asset that is sold. Its Value Architecture is both open and closed, as these firms play in ecosystems but also create their own. They control the supply chain, how they design the asset, how they test it and deliver it. Its Value Network consists of strategic partners in process modelling, 3D visualisation, CAD, infrastructure and telecommunications. Value Finance includes software and services which provide a good margin within a subscription model. Clients are more likely to take add-on services that show significant cost savings.
    Uptime assurers
This business model tends to be found in the transport sector, where it’s important to maximise the uptime of the aircraft, train or vehicle. The value proposition centres on keeping these vehicles operational, either through predictive maintenance for vehicle/aircraft fleet management or, in the case of HGVs, route optimisation. Its Value Architecture is transitioning from closed to open ecosystems. There are fewer lock-in solutions as customers increasingly want an ecosystems approach. Typically, it is distributors, head offices and workshops that interact with the digital twin rather than the end-customer. The Value Network is open at the design and assembly lifecycle stages but becomes closed during sustainment phases. For direct customers, digital twins are built in-house and are therefore less reliant on third-party solutions. Its Value Finance is focused on customers paying a fee to maximise the uptime of the vehicle or aircraft, guaranteeing, for example, access five days a week between certain hours.
    Mission assurers
This business model focuses on delivering the necessary outcome to the customer. It tends to be found with government clients in the defence and aerospace sector. Value propositions are centred around improving the efficacy of support and maintenance/operator insight and guaranteeing mission success or completion. These business models suffer from a complex landscape of ownership for integrators of systems, as much of the data does not make it to sustainment stages. Its Value Architecture is designed to deliver a series of digital threads in a decentralised manner. Immersive technologies are used for training purposes or improved operator experience. Its Value Network is more closed than open, as these industries focus on critical missions of highly secure assets. Therefore, service providers are more security-minded and wary of relying on third-party platforms for digital twin services. Semi-open architecture is used to connect to different hierarchies of digital twins/digital threads. Value Finance revealed that existing pricing models, contracts and commercial models are not yet mature enough to transition into platform-based revenue models. Insights as a service is a future direction but challenging at the moment, with the market not yet mature for outcome-based pricing.
    For B2B service-providers who are looking to generate new revenue from their digital twins, it is important to consider how the business model should be configured and identify major barriers to their success. Our research found that the barriers most often cited were cost, cybersecurity, cultural acceptance of the technology, commercial or market needs and, perhaps most significantly, a lack of buy-in from business leaders. Our 4Vs framework has been designed to help those leaders arrive at a better understanding of the business opportunities digital twin services can provide. We hope this will drive innovation and help digital twins realise their full business potential.
Now for a small request to the reader who has reached this far: we are looking to scale these research findings through our mass survey across industry on the business models of digital twins. If your organisation is planning to implement, or has already started, its journey of transformation with digital twins, please help support our study by participating in our survey. The survey remains fully anonymised, and all our findings will be shared with the DTHub community in an executive summary by the end of the year.
    Link to participate in the survey study https://cambridge.eu.qualtrics.com/jfe/form/SV_0PXRkrDsXwtCnXg 
    Transforming an entire industry is, at its core, a call to action for all industry stakeholders to collaborate and change. The National Digital Twin programme (NDTp) aims to do just that, enabling a national, sector-spanning ecosystem of connected digital twins to support people, the economy, and the natural environment for generations to come. 
But to achieve these ambitious impacts, a great deal of change needs to occur. So, to provide a clear rationale for why potential activities or interventions should be undertaken and why they are expected to work, Mott MacDonald has worked with CDBB to develop a Theory of Change (ToC) and a Benefits Realisation Framework (BRF) to represent the logical flow from change instigators (i.e. levers) through to overall benefits and impacts. The ToC and BRF are expected to provide future leaders and policymakers with a clear understanding of the drivers of change and the actors involved in creating an ecosystem of connected digital twins. 
     
    Components of the Theory of Change 
    Within the ToC, we outline several key components - actors, levers, outputs, outcomes, impacts, and interconnected enablers. As a national programme uniting the built environment through a complex system of systems, it is essential that multiple actors collaborate, including asset owners and operators, businesses, government, academia, regulators, financial entities, and civil society. These actors need to share a common determination to move the needle towards better information management by utilising a combination of interconnected levers to kickstart the change: financial incentives, mandates and legislation as well as innovation.  
    We see that pulling these three levers is likely to trigger tangible change pathways (i.e., the routes in which change takes place), manifested through the ToC outputs and intermediate outcomes, leading to the creation of institutional and behavioural changes, including organisations taking steps to improve their information management maturity and exploring cross-sector, connected digital twins. Ultimately, we consider these change pathways to lead to the long-term intended impact of the NDTp, achieving benefits for society, the economy, businesses, and the environment. 
Underpinning and supporting the levers and change pathways are the enablers. We see these as positive market conditions or initiatives, and they are key to implementing and accelerating the change. They span having a unifying NDTp strategy, vision and roadmap; empowering leadership and governance; leveraging communication and communities; building industry capacity; and adopting a socio-technical approach to change.  
     
    The five levels of the Theory of Change 
    We intend for the ToC to outline how change can occur over five distinct levels: individual, organisational, sectoral, national, and international. The individual level involves training and upskilling of individuals from school students to experienced professionals, so that individuals can be active in organisations to drive and own the change. Our previous work with CDBB focused on the Skills and Competency Framework to raise awareness of the skills and roles needed to deliver a National Digital Twin in alignment with the Information Management Framework (IMF). 
    At the core of establishing the National Digital Twin is the organisational level, within which it is essential for change to occur so that organisations understand the value of information management and begin to enhance business processes. Broadening out from these two levels sits the sectoral level, where the development of better policies, regulations and governance can further support the change across all levels. Similarly, change at the national level will guide strategic engagement and should encourage further public support. 
    Ultimately, change at these four levels should achieve change at an international level, where the full potential of connected digital twins can be realised. Through the encouragement of international knowledge sharing and by creating interconnected ecosystems, challenges that exist on a global scale such as climate change can be tackled together. 
     
    Benefits Realisation Framework 
    Monitoring and evaluation have been fundamental to the assessment of public sector policy and programme interventions for many years. The potential benefits of the NDTp are significant and far reaching, and we have also developed guidance on how to establish a benefits realisation framework, based on UK best practice including HM Treasury’s Magenta Book, to drive the effective monitoring and evaluation of NDTp benefits across society, the economy, businesses, and the environment. We intend for this to provide high-level guidance to measure and report programme benefits (i.e., results) and track programme progress to the NDTp objectives outlined in the Theory of Change. 
     
    The Gemini Papers 
    Our work in developing the Theory of Change for the National Digital Twin programme has informed one of the recently published Gemini Papers. The Gemini Papers comprise three papers addressing what connected digital twins are, why they are needed, and how to enable an ecosystem of connected digital twins, within which the Theory of Change sits.
    Together, we can facilitate the change required to build resilience, break down sectoral silos and create better outcomes for all. 
     
Several terms, such as Digital Ecosystem, Digital Life, Digital World and Digital Earth, have been used to describe the growth in technology. Digital twins are contributing to this progress and will play a major role in the coming decades. More digital creatures will be added to our environments to ease our lives and to reduce harm and danger. But can we trust those things? Please join the Gemini call on 29 March. The reliability ontology was developed to model hardware faults, software errors, autonomy/operation mistakes, and inaccuracy in control. These different types of problems are mapped onto different failure modes; a toy sketch of that mapping follows. The purpose of the reliability ontology is to predict, detect and diagnose problems, then make recommendations or give explanations to the human-in-the-loop. I will discuss these topics and describe how ontology and digital twins are used as tools to increase trust in robots. 
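Here is that toy sketch; the symptom names, failure-mode classes and advice strings are invented for illustration, and the real ontology is far richer.

    # Toy illustration of the reliability ontology's job: map observed problems
    # (hardware faults, software errors, operation mistakes, control inaccuracy)
    # onto failure modes and a recommendation for the human-in-the-loop.
    # Categories and advice strings are invented for the example.

    FAILURE_MODES = {
        "motor_overcurrent": ("hardware fault",      "stop mission; inspect drive train"),
        "node_crash":        ("software error",      "restart node; capture logs"),
        "waypoint_skipped":  ("operation mistake",   "replan route; notify operator"),
        "heading_drift":     ("control inaccuracy",  "recalibrate IMU before continuing"),
    }

    def diagnose(symptom: str) -> str:
        mode, advice = FAILURE_MODES.get(symptom, ("unknown", "escalate to operator"))
        return f"{symptom}: classified as {mode} -> {advice}"

    print(diagnose("heading_drift"))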
Trust in the reliability and resilience of autonomous systems is paramount to their continued growth, as well as to their safe and effective utilisation. A recent global review of aviation regulation for BVLOS (Beyond Visual Line of Sight) operations with UAVs (Unmanned Aerial Vehicles) by the United States Congressional Research Service highlighted that run-time safety and reliability is a key obstacle to BVLOS missions in all twelve of the European Union countries reviewed. A more recent study of 1,500 commercial UAV operators also highlighted that better solutions for reliability and certification remain a priority within unmanned aerial systems. Within the aviation and automotive markets there has been significant investment in diagnostics and prognostics for intelligent health management, supporting improvements in safety and enabling capability for autonomous functions, e.g. autopilots and engine health management.
The safety record in aviation has improved significantly over the last two decades thanks to advancements in the health management of these critical systems. In comparison, although the automotive sector has decades of data from design, road testing and commercial usage of its products, it still has not addressed significant safety concerns after an investment of over $100 billion in autonomous vehicle research. Autonomous robotics faces similar, and also distinct, challenges. For example, there is a significant market for deploying robots into harsh and dynamic environments, e.g. subsea, nuclear and space, which present significant risks along with the added complexity of the more typical commercial and operational constraints of cost, power, communication, etc. In comparison, traditional commercial electronic products in the EEA (European Economic Area) carry a CE marking (Conformité Européenne), a certification mark that indicates conformity with health, safety and environmental protection standards for products sold within the EEA. At present, there is no similar means of certification for autonomous systems.    
Due to this need, standards are being created to support the future requirements of verification and validation of robotic systems. For example, the BSI standards committee on Robots and Robotic Devices and the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (including the P7009 standard) are developing work to support safety and trust in robotic systems. However, autonomous systems require a new form of certification due to their independent operation in dynamic environments. This is vital to ensure successful and safe interactions with people, infrastructure and other systems. In a perfect world, industrial robotics would be all-knowing: with sensors, communication systems and computing power, the robot could predict every hazard and avoid all risks. However, until a wholly omniscient autonomous platform is a reality, there will be one burning question for autonomous system developers, regulators and the public: how safe is safe enough? Certification implies that a product or system complies with the relevant legal regulations, which might differ slightly in nature from technical or scientific testing. The former involves external review, typically carried out by regulators who provide guidance on proving compliance, while the latter usually refers to the reliability of the system. Once a system is certified, that does not guarantee it is safe - it just guarantees that, legally, it can be considered "safe enough" and that the risk is considered acceptable.
There are many standards that might be deemed relevant by regulators for robotic systems: from general safety standards, such as IEC 61508, through domain-specific standards such as ISO 10218 (industrial robots), ISO/TS 15066 (collaborative robots) and RTCA DO-178B/C (aerospace), to ethical aspects (BS 8611). However, none of those standards addresses autonomy, particularly full autonomy, wherein systems take crucial, often safety-critical, decisions on their own. Therefore, based on the aforementioned challenges and the state of the art, there is a clear need for advanced data analysis methods and a system-level approach that enables self-certification for semi or fully autonomous systems, encompassing their advanced software and hardware components and their interactions with the surrounding environment. In the context of certification, there is a technical and regulatory need to be able to verify the run-time safety and certification of autonomous systems. To achieve this in dynamic real-time operations, we propose an approach utilising a novel modelling paradigm to support run-time diagnosis and prognosis of autonomous systems, based on a powerful representational formalism that is extensible to include more semantics to model different components, infrastructure and environmental parameters.
To evaluate the performance of this approach and the new modelling paradigm, we integrated our system with the Robot Operating System (ROS) running on Husky (a robot platform from Clearpath) and other ROS components, such as SLAM (Simultaneous Localisation and Mapping) and ROSPlan with PDDL (the Planning Domain Definition Language). The system was then demonstrated within an industry-informed confined-space mission for an offshore substation. In addition, a digital twin was used to communicate with the system and to analyse the system's outcomes.
Intelligent infrastructure is a new trend that aims to create a network of connected physical and digital objects in industrial domains via a complex digital architecture utilising different advanced technologies. A core element of this is the intelligent and autonomous component. Two-tiers intelligence is a novel concept for coupling machine learning algorithms with knowledge bases. The lack of availability of prior knowledge in dynamic scenarios is without doubt a major barrier to scalable machine intelligence. The interaction between the two tiers is based on the concept that when knowledge is not readily available at the top tier, the knowledge base tier, more knowledge can be extracted from the bottom tier, which has access to trained models from machine learning algorithms; a minimal sketch of this interaction follows.
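A minimal sketch of that interaction, with all names invented: consult the knowledge base first, fall back to the trained model when no answer exists, and write the result back so knowledge accumulates.

    # Sketch of two-tiers intelligence: tier 1 is an explicit knowledge base;
    # tier 2 is a trained model consulted only when the KB has no answer.
    # New answers are written back so knowledge accumulates. Names invented.

    knowledge_base = {"bearing_temp_high": "reduce load and schedule inspection"}

    def model_predict(situation: str) -> str:
        """Stand-in for a trained ML model (tier 2)."""
        return "anomaly detected: flag for review"

    def decide(situation: str) -> str:
        if situation in knowledge_base:      # tier 1: explicit knowledge
            return knowledge_base[situation]
        answer = model_predict(situation)    # tier 2: learned model
        knowledge_base[situation] = answer   # promote into the knowledge base
        return answer

    print(decide("bearing_temp_high"))  # answered by the knowledge base
    print(decide("gearbox_noise"))      # answered by the model, then cached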
It has been reported that intelligent autonomous systems based on AI and ML operating in real-world conditions need to radically improve their resilience and their capability to recover from damage. The view has been expressed that AI and ML have the prospect of solving many of those problems. It has also been claimed that a balanced view of intelligent systems, understanding both their positive and negative merits, will shape the way they are deployed, applied and regulated in real-world environments. Here, a modelling paradigm for online diagnostics and prognostics for autonomous systems is presented. A model of the autonomous system being diagnosed is designed using a logic-based formalism, the symbolic approach. The model supports the run-time ability to verify that the autonomous system is safe and reliable for operation within a dynamic environment. However, during the work we identified some areas where knowledge for the purpose of safety and reliability is not readily available. This was a main motive to integrate ML algorithms with the ontology.
After decades of significant research, two approaches to modelling cognition and intelligence have been investigated and studied: Networks (or Connectionism) and Symbolic Systems. The two approaches attempt to mimic the human brain (neuroscience) and mind (logic, language and philosophy) respectively. While the Connectionism approach considers learning to be the main cognitive activity, Symbolic Systems are broader: they also treat reasoning (for problem solving and decision making) as a main cognitive activity besides learning. Although learning isn't the focus of Symbolic Systems, powerful but limited methods were applied, such as ID3 (a decision-tree induction algorithm) and its many variations and versions. Furthermore, the Connectionism approach is concerned with data, while Symbolic Systems are concerned with knowledge.
    Psychologists have developed non-computational theories of learning that have been a source of inspiration for both approaches. Psychologists have also differentiated between different types of learning (such as learning from experience, by examples, or a combination of both) and have produced methods to test human intelligence (it is difficult to test intelligence in non-human creatures). Mathematicians have contributed statistical methods and probabilistic models to predict behaviour or to rank a trend. Machine Learning (ML) is the umbrella for all algorithms used to mine data in the hope that we can learn something useful from it; the data is usually distributed, structured or unstructured, and of significant size. Although there are several articles on the differences and similarities between Artificial Intelligence and Machine Learning, and on the importance of the two schools, no real or practical attempts to use or combine the two approaches together have been reported in the literature. Therefore, this is an attempt to settle the ongoing conflict between the two existing schools of thought for modelling cognition and intelligence. We argue that two-tiers intelligence is a mandate for machine intelligence, as it is for human intelligence. Animals, on the other hand, have one-tier intelligence: the intrinsic, static know-how. The harmony between the two tiers can be viewed from different angles; they complement each other, and both are mandatory for human intelligence and hence machine intelligence.
    The lack of availability of prior knowledge in dynamic complex systems is without doubt a major barrier to scalable machine intelligence. Several advanced technologies are used to control, manipulate, and utilise all parts, whether software, hardware, mobile assets such as robots, or even infrastructure assets such as wind turbines. The two-tiers intelligence approach will enhance the learning and knowledge sharing process in a setup that heavily relies on symbiotic relationships between its parts and the human operator.
    Read more...
    A digital twin is a digital representation of something that exists in the physical world (be it a building, a factory, a power plant, or a city) and, in addition, can be dynamically linked to the real thing through the use of sensors that collect real-time data. This dynamic link to the real thing differentiates digital twins from the digital models created by BIM software—enhancing those models with live operational data.
    Since a digital twin is a dynamic digital reflection of its physical self, it possesses operational and behavioral awareness. This enables the digital twin to be used in countless ways, such as tracking construction progress, monitoring operations, diagnosing problems, simulating performance, and optimizing processes.
    Structured data requirements from the investor are crucial for the development of a digital twin. Currently, project teams spend a lot of time putting data into files that unfortunately isn't useful during project development or, ultimately, to the owner; sometimes it is wrong, at other times too little, or in other cases an overload of unnecessary data. At the handover phase, unstructured data can leave owner/operators with siloed data and systems, inaccurate information, and poor insight into the performance of a facility. Data standards such as ISO 19650 directly target this problem: at a simple level they require an appreciation of the asset data lifecycle, which starts with defining the need in order to allow for correct data preparation.

    Implementing a project CDE helps ensure that the prepared data and information is managed and flows easily between various teams and project phases, through to completion and handover. An integrated connected data environment can subsequently leverage this approved project data alongside other asset information sources to deliver the foundation of a valuable useable digital twin.
    To develop this connected digital twin, investors and their supply chains can appear to be presented with two choices: an off-the-shelf proprietary solution tied to one vendor, or the prospect of building a one-off solution with the risk of long-term support and maintenance challenges. However, this binary choice dissolves if industry platforms and readily available existing integrations are leveraged to create a flexible custom digital twin.
    Autodesk has provided its customer base with the solutions to develop custom data integrations over many years, commencing with a reliable common data environment solution. Many of these project CDEs have subsequently migrated to become functional and beneficial digital twins because of a structured data foundation. Using industry standards, open APIs and a plethora of partner integrations, Autodesk's Forge Platform, Construction Cloud and, more recently, Tandem enable customers to build the digital twin they need without fear of near-term obsolescence or over-commitment to one technology approach. Furthermore, partnerships with key technology providers such as ESRI and Archibus extend solution options as well as enhancing long-term confidence in any developed digital twin.

    The promises of digital twins are certainly alluring. Data-rich digital twins have the potential to transform asset management and operations, providing owners with new insights to inform their decision-making and planning. Although digital twin technologies and industry practice are still in their infancy, it is clear that the ultimate success of digital twins relies on connected, common, and structured data sources based on current information management standards, coupled with the adoption of flexible technology platforms that permit modification, enhancement or component exchange as the digital twin evolves, rather than committing up front to one data standard or solution strategy.
     
    Read more...
    Introduction 
    The Strategic Pipeline Alliance (SPA) was established to deliver a major part of Anglian Water's 'Water Resources Management Plan' to safeguard against the potential future impacts of water scarcity, climate change and growth, whilst protecting and enhancing the environment. The alliance will deliver up to 500km of large-diameter interconnecting transmission pipelines, associated assets and a Digital Twin.
    Digital transformation was identified early in the programme as a core foundational requirement for the alliance to run its ‘business’ effectively and efficiently. It will take Anglian Water through a digital transformation in the creation of a smart water system, using a geospatial information system as a core component of the common data environment (CDE), enabling collaboration and visualisation in this Project 13 Enterprise. 
     
    Digital Transformation 
    The geospatial information system (GIS) described here is just one part of a wider digital transformation approach that SPA has been developing. It represents a step change in the way that Anglian Water uses spatial data to collaborate and make key decisions, with net savings of £1m identified.
    When the newly formed SPA went from an office-based organisation to a home-based organisation overnight due to COVID-19, standing up an effective central GIS was critical to maintaining the ability to work efficiently, providing a common window onto the complex data environment for all users. With 500km of land parcels and around 5,000 stakeholders to liaise with, the GIS provided the central data repository as well as landowner and stakeholder relationship management. The mobile device applications, land management system, ground investigation solution and ecology mapping processes all enabled SPA to hit its key consenting and EIA (Environmental Impact Assessment) application dates.
    We got the Alliance in place and fully operational within six months, and the SPA GIS has helped fast-track a key SPA goal of increasing automation throughout the project lifecycle; automation tools such as FME (Feature Manipulation Engine), Python and Model Builder have been widely adopted, driving efficiencies.
    The SPA GIS analyses and visually displays geographically referenced information. It uses data that is attached to a unique location and enables users to collaborate and visualise near real time information. Digital optimisation will provide enormous value and efficiencies in engineering, production, and operational costs of the smart water system. Having a single repository of up-to-date core project geospatial deliverables and information has reduced risk and enabled domain experts and our supply chain to interact with data efficiently.  
     
    Enterprise Architecture 
    Spending quality time up front in developing an enterprise architecture and data model allowed us to develop a CDE based around GIS. A cost model was approved for the full five years, and the platform was successfully rolled out. 
    The Enterprise Architecture model was created in a repository linked to Anglian Water's enterprise. This included mapping out the technology and data integration requirements, as well as the full end-to-end business processes. The result was a consistent, interoperable solution stack that could be used by all alliance partners, avoiding costly duplication. GIS was identified as a key method of integrating data from a wide range of different sources, helping to improve access across the alliance to a single version of the truth and improving confidence in data quality. In addition, a fully attributed spatial data model was developed representing the physical assets. This will help support future operations and maintenance use cases that monitor asset performance.
     
    Benefits 
    The use of our GIS is enabling SPA to meet its obligations around planning applications and obtaining landowner consent to survey, inspect and construct the strategic pipeline. Hundreds of gigabytes of data had to be collected, analysed, and managed to create our submissions.
    The SPA GIS provides secure, consistent, and rapid access to large volumes of geospatial data in a single repository. Using a common ‘web-centric’ application, the solution enables teams to cooperate on location-based data, ensuring its 700+ users can access current and accurate information. The intuitive interface, combined with unlimited user access, has enabled the Alliance to rapidly scale without restriction.  We have also enabled the functionality for desktop software (ESRI ArcPro, QGIS, FME, AutoDesk CAD and Civil3D) to connect to the geodatabase to allow specialist users to work with the data in the managed, controlled environment, including our supply chain partners. 
    The integration of SPA Land Management and SPA GIS in one platform has brought advantages to stakeholder relationship management by enabling engagement to be reviewed spatially.  
    SPA's integrated geospatial digital system has been the go-to resource for its diverse and complex teams. Our GIS has also been used extensively to engage with the wider Anglian Water operational teams, enabling greater collaboration and understanding of the complex system. The GIS has, in part, enabled SPA to remove the need to construct over 100km of pipeline, instead re-using existing assets identified in the GIS solution, contributing to the 63% reduction in forecast capital carbon compared to the baseline.
    The SPA Land Management solution incorporates four core areas: land ownership, land access, survey management and stakeholder relationship management (developed by SPA), putting stakeholder and customer engagement at its heart. With 300 unique land access users, these areas would traditionally be looked after by separate teams, with separate systems which struggle to share data. With the digital tool, land and engagement data can be shared across SPA, creating a single source of truth and mitigating risk across the whole infrastructure programme. This has benefitted our customers, as engagement with them is managed much more effectively: our customer sentiment surveys show 98% are satisfied with how we are communicating with them.
    The Enterprise Architecture solution allows capabilities to be transferred into Anglian Water's enterprise, and there has been careful consideration around ensuring the value of data collected during the project is retained. SPA is developing blueprints as part of the outputs to enable future Alliances to align with best practice, data, cyber and technology policies. SPA is also focussing on developing the cultural and behavioural aspects with Anglian Water, to enable the organisation to accept the technological changes as part of this digital transformation. This is a substantial benefit and enables Anglian Water to continue to work towards its operator-of-the-future ambitions, where digital technologies and human interfaces will deliver higher levels of operational excellence.
    Read more...
    Article written by :- Ilnaz Ashayeri - University of Wolverhampton | Jack Goulding - University of Wolverhampton
    STELLAR provides new tools and business models to deliver affordable homes across the UK at the point of housing need. This concept centralises and optimises complex design, frame manufacturing and certification within a central 'hub', while 'spoke' factories engage their expertise through the SME-driven supply chain. The hub-and-spoke approach originated in the airline industry in the 1950s, where it optimised process and logistics distribution. STELLAR takes this one step further by creating a bespoke offsite 'hub and spoke' model purposefully designed to deliver maximum efficiency savings and end-product value. This arrangement is demonstrated through three central tenets: 1) a 3D 'digital twin' factory planning tool; 2) a parametric modelling tool (to optimise house design); and 3) an OSM Should-Cost model (designed specifically for the offsite market).
     
    STELLAR Digital Twin hub article.pdf
    Read more...
    The energy industry has made impressive strides along the path to net-zero, while undergoing the transition to digitisation. Our next, shared step can be to capitalise on the potential of a more dynamic, joined-up and intelligent view of our entire energy system.
    Great Britain’s energy system is experiencing two fundamental transitions in parallel.
    First, the shift to net zero – something we’ve already made significant strides in. The decarbonisation of our sector is well underway, as is the planning for the changing demands on the sector as other industries also undergo this change in their own efforts to reach net zero. 
    And second, digitisation. New technology and the prevalence of real-time data have already transformed many aspects of the energy industry, and there are a multitude of commercial projects that bring to life the concept of digital twins of specific IT systems.
    An opportunity to come together
    We now have an opportunity ahead of us: to bring these parallel transitions together to create something incalculably more powerful, with the potential to help us take even greater strides towards net zero.
    This is why we’re launching an industry-wide programme to develop the Virtual Energy System – a digital twin of Great Britain’s entire energy system.
    We recognise it’s an ambitious goal.
    But we also recognise that it could be a central tool, bringing together every individual element of our system to create a collective view which will give us more dynamic intelligence around all aspects of the energy industry.
    The Virtual Energy System will also provide us all with a virtual environment to test, model and make more accurate forecasts – supporting commercial decision-making, while enabling us to understand the consumer impact of changes before we make them.
    This ambition is not out of reach - many elements of the energy industry are already using individual digital twins. The next step on this journey is to work together to find a way to take these digital twins forward, in unison. A way in which we can connect these assets and encourage future development across the entire energy spectrum.
    A tool created by our industry, for everyone
    The key to the Virtual Energy System will be collaboration - this won’t be the ESO’s tool, but a tool available to our entire industry - a tool that we will all be able to tap into and derive learnings from, that will support future innovation and problem solving.
    But we need to start somewhere. We are sharing the concept and throwing down the gauntlet. It will only become a reality if it is collaboratively designed and developed by the whole energy industry.
    The ESO has set out its initial thinking on what a roadmap could look like, but we need our best and brightest minds to feed into this to shape its future. We know we won’t always reach a perfect consensus every time, but only through engagement and open collaboration will the full benefits be unlocked.
    What’s next?
    In December we brought the energy industry together with Ofgem and BEIS for a one-day conference. It was an opportunity to explore the proposed programme, and kickstart our feedback and engagement period. From this, we plan to form working groups to begin a deeper dive into the key areas of development that will underpin the entire development journey. To watch back the conference, contribute to our initial stakeholder feedback and view a brief outline on the suggested structure visit our website.
    Get Involved and Hear More
    Join us on Thursday 10th February, 1pm-2pm, for a brief introduction to our Common Framework Benchmarking Report ahead of its public release, followed by a workshop on the key socio-technical factors which could make up the common framework of the Virtual Energy System. There will be plenty of opportunity to discuss and ask questions; it will be an informal session where we can collaborate around the latest ideas.
    Register to attend
     
    You can also join us on the Gemini Call on 8th February for a short introduction before the full session. 
    Read more...
    We are facing a growing challenge in the UK in managing the assets we construct. New structures will need future maintenance, and much of our existing infrastructure is ageing and performing beyond its design life and intended capacity. In order to get more out of our existing assets with minimum use of limited resources, we need to better understand how they are performing. The climate crisis and extreme weather events bring additional strain to the condition and structural health of assets, making assessing their condition increasingly important. There are logistical challenges too: visually inspecting remote and hard-to-access assets can be expensive and hazardous.
    Many people don't realise that the Earth's surface is being continuously scanned by different satellite sensors, in different directions, day and night. While the proliferation of sensors and satellite technology has fuelled a revolution in the way we can monitor assets, the ideal is to use the different tools in the engineer's toolbelt to find the right solution for each case.
    We're used to the idea of Google Earth, and many people in our sector are learning about the usefulness of maps and geographical information systems (GIS), with many open datasets provided by organisations like Ordnance Survey in the UK. What you see as satellite images on Google Earth are forms of optical data: like taking pictures of the Earth's surface over time and using our eyes to see the changes (or perhaps automating change detection through machine learning, but that's another point). What many people working in the built environment do not realise is that there is a whole spectrum of other sensors that can show us beyond what our eyes can see.
    Did you know that radar satellites continuously scan the Earth, emitting and receiving back radar waves? These satellites do not rely on daylight to image, so we can collect radar measurements day and night, and even through clouds. Using different processing techniques, this data can be used to create 3D digital elevation models, map floods and measure millimetres of movement at the Earth's surface, all from hundreds of kilometres up in space. And did you know there is free data available to track pollutants, monitor ground changes and track vegetation?
    There is. In huge volumes. Petabytes of data are held in archives which allow us to look backwards in time as well as forwards. With all this opportunity, it can seem daunting to know where to get started.
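    One concrete example of what the processing unlocks: the snippet below applies the standard interferometric conversion from a radar phase change to line-of-sight displacement. The C-band wavelength (roughly 5.6 cm, as on Sentinel-1) is an assumption for illustration; other missions use different bands.

```python
import math

# Standard interferometric relationship between radar phase change and
# line-of-sight ground movement. Wavelength assumed for a C-band sensor.
WAVELENGTH_M = 0.056  # metres (illustrative, ~Sentinel-1 C-band)

def los_displacement_mm(delta_phase_rad: float) -> float:
    """Convert an unwrapped interferometric phase change (radians) into
    line-of-sight displacement in millimetres."""
    return -(WAVELENGTH_M / (4 * math.pi)) * delta_phase_rad * 1000

# Half a fringe (pi radians) corresponds to ~14 mm of line-of-sight movement.
print(round(los_displacement_mm(math.pi), 1))   # -14.0
```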
    I have worked in the design, construction and maintenance sectors for over a decade, and I came back to academia to learn about the opportunities of satellite data from the German Aerospace Center and the Satellite Applications Catapult. I spent a PhD's worth of time retraining in data analysis so that I could combine the latest analytical techniques with a civil engineer's lens to better understand how we can unlock value from this data. I'll save you the time and give a quick overview of what we can do in industry now, and share some learnings from talented researchers working on a Centre for Digital Built Britain (CDBB) project on satellite monitoring to support digital twin journeys.
    Hope to see you next Tuesday 1st February at the Gemini call for an introduction to the topic and some signposting on where you can go to find out more to make the most of such data for your own assets.
    Read more...
    AEC Information Containers based on ISO 21597, the so-called ICDD, are a great way to store data and relations. Widely discussed as a possible structure for any CDE, this standard was made to hand over project data or exchange files of a heterogeneous nature in an open and stable container format, and will therefore become the starting point of many digital twins.

    The standard says: A container is a file that shall have an extension ".icdd" and shall comply with ISO/IEC 21320–1, also known as ZIP64.
    Information deliveries are often a combination of drawings, information models, text documents, spreadsheets, photos, videos, audio files, etc. In this case, many scans and point clouds came on top. And while we have all the metadata datasets in our system, it is pretty hard to hand this over to a client who might have another way of handling it. So we have now put all those specific relationships between information elements in those separate documents using links, because we believe it will contribute significantly to the value of the information delivery.
    We successfully handed over a retroBIM project from a nuclear facility in Germany. The ZIP was 661,469,018 KB; before zipping, it was around 8TB! It has a whole archive back to the 1960s, it has all the models, all the point clouds the models were made from, and all the documents produced from the models too. All in all, we have 2,338 documents.
    We created an ICDD container that, when represented as an RDF graph (index & links), is composed of 29,762 unique entities, 37,897 literal values and 147,795 triples.
    All this information is now transferable independent of any software application, and it is a great starting point for a digital twin. All sensor and live data can be added the same way we connected documents with BIM elements; the only difference is that you do not store it in a ZIP file but rather run it in a graph database. This way you will not only have the most powerful and fastest twin, but also the most future-proof and extensible one you can possibly get.
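    As a rough sketch of what such a handover could look like in code, the example below builds a tiny ICDD-style container: an RDF index linking payload documents, zipped into a single ".icdd" file. The vocabulary and folder names here are simplified stand-ins; a conformant container uses the ISO 21597 container ontologies exactly.

```python
import zipfile
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Stand-in vocabulary for illustration, not the ISO 21597 ontology.
CT = Namespace("https://example.org/container#")

# Build a minimal RDF index that describes and links payload documents.
index = Graph()
doc = URIRef("https://example.org/container#doc-1")
index.add((doc, RDF.type, CT.Document))
index.add((doc, CT.filename, Literal("Payload documents/pump_room.ifc")))
index.add((doc, CT.relatedTo, Literal("Payload documents/inspection_report.pdf")))

# An ICDD container is a ZIP (per ISO/IEC 21320-1) with an index and payloads.
with zipfile.ZipFile("handover.icdd", "w", zipfile.ZIP_DEFLATED) as container:
    container.writestr("Index.rdf", index.serialize(format="xml"))
    container.writestr("Payload documents/pump_room.ifc", b"...model bytes...")
    container.writestr("Payload documents/inspection_report.pdf", b"...pdf bytes...")
```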
     
    Read more...
    Next week’s Gemini Call will include a presentation by Jack Ostrofsky, Head of Quality and Design at Southern Housing Group and Chair of BIM for Housing Associations.
    BIM for Housing Associations (BIM4HAs) is a client-led and client-funded initiative set up in 2018 to accelerate the uptake of consistent, open-standards-based BIM processes across the housing association sector.
    An urgent priority for this group is building and fire safety, particularly in the context of the development of a Golden Thread of Building Safety Information which is part of the Building Safety Bill which is expected to receive Royal Assent in 2022.
    Understanding of BIM and Digital Twins in the residential housing sector is poor, yet as long-term owner-operators of built assets, housing associations are ideally placed to benefit from the efficiencies of BIM and Digital Twins.
    In June 2021 BIM4HAs published a Toolkit of resources for housing associations aimed at assisting them in the process of adopting ‘Better Information Management’. The toolkit, which is free to use, translates the requirements of the National BIM Framework into accessible language and practical tools for housing associations.
    Jack will describe an example of the challenge housing associations face in using structured data to manage their assets: the transfer of spatial information about buildings. Designers and contractors label dwellings as 'plots', while development managers and asset managers in housing associations have their own naming conventions, which have evolved in a traditional and disjointed manner. As a result, the metadata links are severed at handover and a great deal of valuable, useable information is lost to the client.
    Jack’s employer Southern Housing Group has developed a spatial hierarchy and property reference numbering system which was published in the BIM4HAs Toolkit in June. 
    The spatial hierarchy and naming system links to commonly understood asset management language and informs Asset Information Requirements that housing associations can use to instruct development and refurbishment projects. This process enables contractors to provide useable metadata to housing associations and will form an essential part of the implementation of a Golden Thread of Building Safety Information. 
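    A hypothetical sketch of the idea: if each dwelling's transient 'plot' label is tied to a persistent reference such as a UPRN within the spatial hierarchy, the metadata link survives handover. All field names and values below are invented for illustration and are not taken from the SHG documents.

```python
from dataclasses import dataclass

@dataclass
class DwellingRecord:
    plot_id: str   # contractor's transient label during construction
    uprn: str      # Unique Property Reference Number (persistent)
    block: str     # position in the spatial hierarchy
    address: str

handover = [
    DwellingRecord("PLOT-014", "100023336956",
                   "Block A / Level 2", "Flat 7, Example House"),
]

# At handover the asset manager keys everything by UPRN, not by plot,
# so the metadata link between design and operation is preserved.
by_uprn = {d.uprn: d for d in handover}
print(by_uprn["100023336956"].plot_id)
```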
    In a further development Southern Housing Group, working with members of the BIM4HAs community, have developed and are implementing an Asset Information Model based on the Gemini Principles and aligned with the other BIM4HAs work. This Model will be published for free, for anyone to use, by BIM4HAs as part of an update to the BIM4HAs Toolkit in February. 
    Please join us on the Gemini Call on 25th January at 10.30 to hear about the spatial hierarchy work and put your questions to Jack.
    Download the Spatial Hierarchy Document and ‘The Business Case for BIM’ Document from the links below. Both are part of the Toolkit.
    The whole Toolkit can be downloaded for free from the National Housing Federation website here: housing.org.uk/BIM4HAs
     
    BIM for Housing Associations Pt1 The Business Case for BIM.pdf SHG Spatial Hierarchy UPRN Procedures.pdf
    Read more...
    In setting up the SPA Enterprise it was acknowledged that BIM principles would drive outperformance in both the project and asset lifecycles, and therefore an early focus ensured that the foundations were in place to enable SPA to maximise benefits from data and information.
    To smooth the integration of our physical assets and the associated data and information produced, our enterprise architecture focussed on delivering a solution that would:
    Maximise the benefits from the existing Anglian enterprise.
    Ensure that data and information would integrate seamlessly with existing Anglian repositories.
    Easily be transitioned from the project to the asset information model.
    This approach would not hinder bringing in any additional enterprise systems that would benefit Anglian Water, but would ensure that any legacy systems were planned for seamless integration, giving a longer-term benefit (blueprint) for other and future Alliances.
    Development of the BIM strategy identified the need for the following BIM tools, in line with recommendations in PAS 1192-2 (now superseded):
    BIM Execution Plan – in response to the EIR (Exchange Information Requirements).
    Common Data Environment (CDE) – to allow exchange of information within the project team and the wider supply chain eco-system: GIS (Geospatial Information System), BIM360, Azure, SharePoint.
    Master Information Delivery Plan (MIDP) and Task Information Delivery Plan (TIDP) – to manage delivery of information during a project.
    Supply chain EIR.
    Asset Information Model.
    Naming convention.
    During the initial period SPA has had to work closely with Anglian Water to ensure that we have the following in place:
    Clear information repositories.
    Data stewards.
    Approved data structures.
    Collaborative communication mechanisms.
    Appropriate security and authentication checks.
    Appropriate governance.
    Clearly defined and agreed processes.
    As an early adopter on the Project 13 programme (Centre for Digital Built Britain), the relational development of our supply chain eco-system was essential.
    All our suppliers complete a Collaboration Request Form (MS Flow Automate), and a BIM Capability Assessment (MS Flow Automate). We work through the SPA Supplier EIR with all partners to share our information management standards and determine how much we need to work with them to ensure the benefits of BIM are realised.
    Part of this induction is being clear on the expected deliverables and the format of these, and how they can interact with our common data environment. For all suppliers we set up a dedicated folder in our SharePoint and BIM360 environments for all information exchange and should there be a need for the supplier to access GIS or BIM models we assist them from a technological and behavioural perspective.
    We have created an automated OCRA (Originate, Check, Review, Approve) process that SPA end-users use for Quality Assurance (QA) in SharePoint and BIM360. With BIM360 the OCRA workflow functionality is built in, and we can create new, customisable checking procedures at will.
    The CDE storage philosophy for project deliverable information is data-driven, utilising file metadata to structure, sort, and search for information. 'Containerisation' of information using subfolder subsystems is kept minimal, thereby facilitating transparency and consistency in the storage of our information across all projects.
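    A minimal sketch of that metadata-driven philosophy, with invented fields rather than SPA's actual schema: deliverables are found by querying attributes instead of walking folder trees.

```python
# Each deliverable carries metadata; retrieval is a query over attributes,
# not navigation of a deep subfolder hierarchy. Field names are illustrative.
documents = [
    {"file": "SPA-PIP-DR-C-0001.pdf", "discipline": "Civil",
     "status": "Approved", "project": "Section 12"},
    {"file": "SPA-PIP-MO-M-0042.ifc", "discipline": "Mechanical",
     "status": "In Review", "project": "Section 12"},
]

def find(**criteria):
    """Return every document whose metadata matches all given attributes."""
    return [d for d in documents
            if all(d.get(k) == v for k, v in criteria.items())]

print(find(project="Section 12", status="Approved"))
```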
    A Digital Delivery lead was put in place by SPA as the platform owner for BIM 360 supported by a team of BIM Engineers. The setup, configuration and management of the platform is governed by the BIM Execution Plan and the CAD (Computer Aided Design) strategy.
    Throughout the design phase of projects in SPA, the various teams have endeavoured to create, and use coordinated, internally consistent, computable information about the project and provide this information to project stakeholders in the most valuable format possible. Following the statutory process and environmental impact study phases for the initial projects, the project moved towards detailed design with a multi-disciplinary design team. With support from the senior leadership in SPA, the design team have embraced a production-based approach which has entailed the adoption of 3D modelling techniques and BIM workflows.
    Data is transferred from analysis and design applications directly into an integrated model, leveraging 3D modelling techniques to enable clash detection, design visualisation and 'optioneering' as part of SPA's Digital Rehearsal approach. The 3D and 2D information models not only serve as a visual communication tool to convey the infrastructure design to the various teams, statutory bodies and public stakeholders, but are also a vital tool to inform Anglian Water of the development of the assets they will own and operate. The project team have utilised various BIM and GIS technologies to communicate the various constraints (environmental, legislative, physical, ecological, hydraulic, geotechnical, etc.) and the complex design effectively to all stakeholders. This has been achieved in many formats, utilising various software products throughout the project's life cycle, including the use of a virtual reality (VR) gaming engine and the direct importation of the single integrated 3D tunnelling compound model into the GIS environment.
    This means that design conflicts are identified and rectified before construction drawings are completed and issued. Similarly, 3D simulations help promote safety and avoid costly inefficiency by identifying potential issues and mitigating against them in advance.
    It is estimated that setting up this framework will generate at least £1,723k in net savings over the project period using BIM. This is estimated from the reduction in individual designers' time, as well as overall project time saved.
    It should be noted that many non-financial benefits have also been identified, including benefits to safety (better identification of safety issues), to the wellbeing of our staff (reduced driving, as collaboration in the model can be remote), and to the environment (reduced carbon, as fewer miles are driven to meetings). There will also be operational (Opex) savings because of the way that we collate, capture, manage and re-use data within the asset information model; these operational cost savings are yet to be quantified. There are also non-quantifiable benefits expected from a reduction in rework and prolongation.
    In conclusion the introduction of BIM techniques has greatly benefitted the Alliance and will continue to do so throughout the project and asset lifecycle.
    Read more...
    The climate emergency and the transition to a net zero economy means businesses, governments and individuals need access to new information to ensure that we can mitigate and adapt to the effects of environmental and climate change. Environmental Intelligence will be a critical tool in tackling the climate and ecological crises, and will support us as we move towards more sustainable interaction with the natural environment, and delivery of net zero.
    Environmental Intelligence is a fast-developing new field that brings together Environmental data and knowledge with Artificial Intelligence to provide the meaningful insight to inform decision-making, improved risk management, and the technological innovation that will lead us towards a sustainable interaction with the natural environment. It is inherently inter-disciplinary and brings together research in environment, climate, society, economics, human health, complex eco-systems, data science and AI.
    The Joint Centre for Excellence in Environmental Intelligence (JCEEI) is a world-leading collaboration between the UK Met Office and the University of Exeter, together with The Alan Turing Institute and other strategic regional and national collaborators. This centre of excellence brings together internationally renowned expertise and assets in climate change and biodiversity, with data science, digital innovation, artificial intelligence and high-performance computing.
    The JCEEI's Climate Impacts Mitigation, Adaptation and Resilience (CLIMAR) framework uses data science and AI to integrate multiple sources of data to quantify and visualise the risks of climate change to populations, infrastructure and the economy, in a form accessible to a wide variety of audiences, including policy makers, businesses and the public.
    CLIMAR is based on the Intergovernmental Panel on Climate Change's (IPCC; https://www.ipcc.ch) risk model, which conceptualises the risk of climate-related impacts as the result of the interaction of climate-related hazards (including hazardous events and trends) with the vulnerability and exposure of human and natural systems. Hazards are defined as 'the potential occurrence of a natural or human-induced physical event or trend or physical impact that may cause loss of life, injury, or other health impacts as well as damage and loss to property, infrastructure, livelihoods, service provision, ecosystems, and environmental resources'; exposure as 'the presence of people, livelihoods, species or ecosystems, environmental functions, services, and resources, infrastructure, or economic, social or cultural assets in places and settings that could be adversely affected'; and vulnerability as 'the propensity or predisposition to be adversely affected', which encompasses sensitivity or susceptibility to harm and a lack of capacity to cope and adapt.
     
    A mathematical model is used to express the risk of a climate-related impact, e.g. an adverse health outcome associated with increased temperatures, or a building flooding in times of increased precipitation. Risk is defined as the probability that an event happens in a defined time period and location, and is a combination of the probability of the hazard occurring together with probability models for exposure and vulnerability. In the simplest case, the probabilities of hazard, exposure and vulnerability would be treated as independent, but in reality the situation is much more complex and the different components will often depend on each other, which requires conditional probability models. For example, people's exposure to environmental hazards (e.g. air pollution) may depend on their vulnerability (e.g. existing health conditions).
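    A toy numerical reading of this, with invented probabilities: in the independent case, risk is a simple product, while dependence replaces a marginal probability with a conditional one.

```python
# Illustrative probabilities only; none of these values come from CLIMAR.
p_hazard = 0.10      # chance of the hazard (e.g. extreme heat) in a period
p_exposure = 0.60    # chance a person/asset is exposed
p_vulnerable = 0.20  # chance the exposed person/asset is susceptible

# Independent case: risk is the product of the three probabilities.
risk_independent = p_hazard * p_exposure * p_vulnerable
print(f"{risk_independent:.4f}")   # 0.0120

# Dependent case: exposure may be higher for vulnerable groups, so the
# marginal exposure probability is replaced by a conditional one.
p_exposure_given_vulnerable = 0.80
risk_dependent = p_hazard * p_vulnerable * p_exposure_given_vulnerable
print(f"{risk_dependent:.4f}")     # 0.0160
```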
    The UKCP18 high-resolution climate projections are used to inform models for hazards and provide information on how the climate of the UK may change over the 21st century (https://www.metoffice.gov.uk/research/approach/collaboration/ukcp/index). This enables the exploration of future changes in daily and hourly extremes (e.g. storms, summer downpours, severe wind gusts), hydrological impacts modelling (e.g. flash floods) and climate change for cities (e.g. urban extremes). The headline results from UKCP18 are a greater chance of warmer, wetter winters and hotter, drier summers, along with an increase in the frequency and intensity of extremes. By the end of the 21st century, all areas of the UK are projected to be warmer and hot summers are expected to become more common. The projections also suggest significant increases in hourly precipitation extremes, with the rainfall associated with an event that occurs typically once every 2 years increasing by 25%, and the frequency of days with hourly rainfall > 30 mm/h almost doubling, by the 2070s; increasing from the UK average of once every 10 years now to almost once every 5 years.
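    To make the return-period language concrete: the annual chance of an event is simply one over its return period, so the shift quoted above is roughly a doubling of the annual probability. The snippet below just restates that arithmetic; it adds no projection output of its own.

```python
# Return period (years) -> annual exceedance probability.
def annual_probability(return_period_years: float) -> float:
    return 1.0 / return_period_years

print(annual_probability(10))   # 0.1 -> the event in a typical year today
print(annual_probability(5))    # 0.2 -> roughly double the chance by the 2070s
```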
    CLIMAR is currently being used in a range of real-world applications based on the UKCP18 projections across sectors that will be affected by changes in the climate, including energy system security, telecommunications, critical infrastructure, water and sewage networks, and health. Two examples are:
    working with Bristol City Council on the effects of climate change on urban heat, inequalities between population groups and the efficacy of methods for adapting building stock (e.g. improved ventilation, double glazing) to keep people cool, and safe, in periods of extreme heat;
      working with a consortium led by the National Digital Twin Programme and the Centre for Digital Built Britain to develop a Climate Resilience Demonstrator, integrating climate projections with asset information and operational models to develop a Digital Twin that can be used to assess the future risks of flooding to critical infrastructure, including energy, communications, and water and sewage networks.
    This will provide a step-change in our understanding of the potential effects of climate change on critical infrastructure and demonstrates the power of inter-disciplinary partnerships, spanning academia and industry, that will be crucial in unlocking the enormous potential for Digital Twins to enhance our resilience to climate change across a wide variety of sectors. For further information on CLIMAR and associated projects, please see https://jceei.org/projects/climar/ and for information on the National Digital Twin Climate Resilience Demonstrator (CReDo) see https://digitaltwinhub.co.uk/projects/credo/
    Read more...
    The bigger and more complicated the engineering problem, the more likely it is to have a digital twin. Firms that build rockets, planes and ships, for example, have been creating digital twins since the early 2000s, seeing significant operational efficiencies and cost-savings as a result. To date, however, few firms have been able to realise the full potential of this technology by using it to develop new value-added services for their customers. This article describes a framework designed to help scale the value of digital twins beyond operational efficiency towards new revenue streams.
    In spite of the hype surrounding digital twins, there is little guidance for executives to help them make sense of the business opportunities the technology presents, beyond cost savings and operational efficiencies. 
    Many businesses are keen to get a greater return on their digital twin investment by capitalising on the innovation and revenue-generating opportunities that may arise from a deeper understanding of how customers use their products. However, because very few firms are making significant progress in this regard, there is no blueprint to follow. New business models are evolving, but the business opportunities for suppliers, technology partners and end-users are yet to be fully documented.
    Most businesses will be familiar with the business model canvas as a tool to identify current and future business model opportunities. Our 4 Values (4Vs) framework for digital twins is a more concise version of the tool, developed to help executives better understand potential new business models. It was designed from a literature review and validated and modified through industry interviews. 
    The 4Vs framework covers: the value proposition for the product or service being offered; the value architecture, or the infrastructure that the firm creates and maintains in order to generate sustainable revenues; the value network, representing the firm's infrastructure and network of partners needed to create value and to maintain good customer relationships; and value finance, such as cost and revenue structures.
    Four types of digital twin business models
    From extensive interviews with middle and top management on services offered by digital twins, we identified four different types of business models and applied our 4Vs approach to understand how those models are configured and how they generate value. 
    Brokers 
    These were all found in information, data and system services industries. Their value proposition is to provide a data marketplace that orchestrates the different players in the ecosystem and provides anonymised performance data from, for example, vehicle engines or heating systems for buildings. Value Finance consists of recurring monthly revenues levied through a platform, which itself takes a fee and allocates the rest according to the partnership arrangements.
    Maintenance-optimisers 
    This business model is prevalent in the world of complex assets, such as chemical processing plants and buildings. Its value proposition lies in providing additional insights to the customer on the maintenance of their assets to provide just-in-time services. What-if analysis and scenario planning are used to augment the services provided with the physical asset that is sold. Value Architecture is both open and closed, as these firms play in ecosystems but also create their own. They control the supply chain, how they design the asset, how they test it and deliver it. The Value Network consists of strategic partners in process modelling, 3D visualisation, CAD, infrastructure and telecommunications. Value Finance includes software and services which provide a good margin within a subscription model. Clients are more likely to take add-on services that show significant cost savings.
    Uptime assurers 
    This business model tends to be found in the transport sector, where it’s important to maximise the uptime of the aircraft, train or vehicle. 
    The value proposition centres on keeping these vehicles operational, through predictive maintenance for vehicle/aircraft fleet management and, in the case of HGVs, route optimisation. Value Architecture is transitioning from closed to open ecosystems; there are fewer lock-in solutions as customers increasingly want an ecosystems approach. Typically, it is distributors, head offices and workshops that interact with the digital twin rather than the end-customer. The Value Network is open at the design and assembly lifecycle stages but becomes closed during sustainment phases; for direct customers, digital twins are built in-house and are therefore less reliant on third-party solutions. Value Finance is focused on customers paying a fee to maximise the uptime of the vehicle or aircraft, guaranteeing, for example, access five days a week between certain hours.
    Mission assurers
    This business model focuses on delivering the necessary outcome to the customer. It tends to be found with government clients in the defence and aerospace sector. Value propositions are centred on improving the efficacy of support, maintenance and operator insight, and on guaranteeing mission success or completion. These business models suffer from a complex landscape of ownership for systems integrators, as much of the data does not make it to the sustainment stages.
    Value Architecture is designed to deliver a series of digital threads in a decentralised manner. Immersive technologies are used for training purposes or improved operator experience. The Value Network is more closed than open, as these industries focus on critical missions involving highly secure assets; service providers are therefore more security-minded and wary of relying on third-party platforms for digital twin services. Semi-open architecture is used to connect to different hierarchies of digital twins/digital threads. On Value Finance, existing pricing models, contracts and commercial models are not yet mature enough to transition into platform-based revenue models. Insights-as-a-service is a future direction but challenging at the moment, with the market not yet mature for outcome-based pricing.
    For B2B service-providers who are looking to generate new revenue from their digital twins, it is important to consider how the business model should be configured and identify major barriers to their success. Our research found that the barriers most often cited were cost, cybersecurity, cultural acceptance of the technology, commercial or market needs and, perhaps most significantly, a lack of buy-in from business leaders. Our 4Vs framework has been designed to help those leaders arrive at a better understanding of the business opportunities digital twin services can provide. We hope this will drive innovation and help digital twins realise their full business potential.  
    ---------
    Our research to date has been through in-depth qualitative interviews across industry but we wish to expand this research and gather quantitative information on specific business model outcomes from digital twins across industry. 
    If you would like to support this research and learn more about the business model outcomes from digital twins, then please participate in our survey! 
    Take part in our survey here:     https://cambridge.eu.qualtrics.com/jfe/form/SV_0PXRkrDsXwtCnXg 
    Information sheet.pdf
    Read more...
    Like many companies Atkins, a member of the SNC-Lavalin group, is investing heavily in digital transformation and we all know that skills are a key enabler.  We wanted to be clear about the skills needed by our workforce to be able to deliver digitally.  The starting point was finding out what digital skills we have in the company.  Then we could identify gaps and how we might bridge them.
    But what are digital skills…and how do we measure them?
    As we pondered this, we realised that there were many, many challenges we would need to address.  Atkins is a ‘broad church’ comprising many professionals and technical specialisms.  Digital transformation is challenging the business in many different ways.  Articulating a single set of digital skills that reflects needs across the business is complicated by language, terminology and digital maturity.  Furthermore, unlike corporate engagement surveys, there is no established industry baseline that we can use to benchmark our corporate digital skills against.  To evaluate a skills gap would require an estimate of both the quantity and types of skills that will be required in the future – something that is far from certain given our industry’s vulnerability to disruption.
    We knew we were trying to grasp something universal and not sector or domain specific, so this is the definition we decided to use: Digital skills enable the individual to apply new/digital technology in their professional domain. 
    That left the question around how we measure Digital Skills.
    We did some research and explored several frameworks including Skills For the Information Age (SFIA), the EU Science Hub’s DigiComp and the DQ Institute’s framework.  As we were doing this, we became aware that the CDBB Skills and Competence Framework (SCF) was being launched and we immediately sensed it could be just what we were looking for. 
    Why?  Apart from being right up to date, it has a simple and straightforward structure and can be tailored for an organisation.  The proficiency levels are very recognisable - Awareness, Practitioner, Working and Expert - and it is in the public domain.  But most importantly, it seemed like a good fit because most of what we do at Atkins is in some way related to infrastructure and therefore within the domain of digital twins. 
    But we needed to test that fit.  Our hypothesis was “…that the CDBB SCF had sufficient skills to represent the ability of our staff”.  We tested this with a survey, interviews, and a series of workshops.
    In the survey we looked at how individuals from different groups in the company (differentiated by their use of technology) understood the 12 skill definitions and the extent to which they needed and used each skill in their day-to-day role.  We also explored whether there were other digital skills that respondents felt were not recognised by the framework. We followed up the survey with interviews to clarify some of the responses and then used workshops to play back our findings and sense-check the conclusions.
    Our overall conclusion was that we had good evidence to support our hypothesis, i.e. that the CDBB SCF was a good fit for our workforce.  However, we realised we would need to bring the indicators to life so that users could relate them to their roles, particularly people at the Awareness and Working levels.  This is not unexpected: generally, people with lower levels of competence don't know what they don't know. 
    Another conclusion was that we needed a fifth, null competency indicator to recognise that not everyone needs to know about everything.
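    To illustrate how a null level slots in for gap analysis, here is a sketch with an assumed ordering of the levels and invented role requirements; it is not the CDBB framework's own data or our assessment tool.

```python
# Five levels: the four SCF proficiency levels plus a null level for skills a
# role simply does not need. The ordering here is an illustrative assumption.
LEVELS = ["None", "Awareness", "Working", "Practitioner", "Expert"]

def gap(required: str, assessed: str) -> int:
    """Positive result = levels still to climb for this skill."""
    return LEVELS.index(required) - LEVELS.index(assessed)

# Invented example role profile and self-assessment.
role_requirement = {"Data Management": "Practitioner", "Asset Management": "Awareness"}
self_assessment  = {"Data Management": "Working",      "Asset Management": "None"}

for skill, required in role_requirement.items():
    print(skill, gap(required, self_assessment[skill]))
# Data Management 1
# Asset Management 1
```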
    In terms of next steps, we are working with a supplier to develop an online assessment tool so that we can apply the framework at scale.  We have rewritten the skills definitions and competence indicators to omit any reference to the NDT programme etc., although these were very minor changes.
    We are working on ways to bring the skill level indicators to life for our employees e.g. through guidance materials, FAQs etc.  We’re also developing an initial ‘rough and ready’ set of learning materials related to each of the digital indicators at Awareness and Practitioner levels.  We expect the CDBB’s Training Skills Register to feature prominently in this!
    Some of the things we have parked for the moment are: (1) moderation and accreditation of the assessments (our first wave will be self-assessed only), and (2) integrating the skills into role definitions.
    We’re very grateful to CDBB for the timely creation of the SCF and I look forward to sharing our onward journey with the DT Hub community.
    Read more...
    Anglian Water is an early adopter of digital twins within the water sector, working closely with the Centre for Digital Built Britain to help develop the market and showcase how digital twins can support an organisation’s strategic outcomes.
    Anglian Water has a 15-year vision to develop a digital twin to sit alongside its physical assets.

     
    From an Anglian Water perspective, the Digital Twin is essentially an accurate digital representation of their physical assets, enabling insight, supporting decision making and leading to better outcomes. Aligning the digital twin objectives to Anglian Water’s regulated outcomes, as defined by the regulator OFWAT, has been a key step in developing the business case.
    With the initial vision and roadmap outlined, the next step was to implement a proof of concept to explore the value created from digital twins. Anglian Water undertook a discovery phase and a proof of concept for a Digital Twin with Black & Veatch back in 2019, and started to define how a Digital Twin would benefit the delivery and management of physical assets.
    The discovery phase looked to understand the current landscape, further enhancing the vision and roadmap, and establish persona requirements. It proved vital to really understand the organisation and the impact on people during this early exploratory work.
    The proof of concept looked at delivering three main outputs, centred on a pumping station to keep the scope tight and the value measurable:
    To demonstrate an asset intelligence capability.
    To demonstrate a visualisation capability.
    To examine the asset data and architecture.
    Alongside the proof of concept, other initiatives were kick-started to consider how other elements of digital twin might add value, with a focus on more enhanced use of hydraulic models to explore how water networks could be further optimised. Anglian Water recognised early on that by integrating and enhancing many of the existing enterprise systems, existing investments could be leveraged and technology gaps identified.
    Learning from the proof of concept and other early work, Anglian Water looked to the next step of the roadmap: a scaled demonstrator on the Strategic Pipeline Alliance. The Strategic Pipeline Alliance was set up to deliver up to 500km of large-scale pipeline and, alongside this, to start defining and delivering the first phase of the digital twin. SPA's 2025 vision is to deliver a large-scale, holistically linked water transfer resilience system, operated, performance-managed and maintained using advanced digital technology.
    The SPA team set about developing a digital twin strategy which is based on the wider corporate vision and enhances the proof of concept work. The basic premise of the SPA digital twin is to integrate traditionally siloed business functions and systems, to deliver enhanced capability across the asset lifecycle.
    As with Anglian Water the SPA strategy is focused on using the technology available and developing a robust enterprise, integration, and data architecture to create a foundation for digital twin. Taking this a step further it was decided to adopt a product based approach, thinking about the development of digital twin products aligned to physical assets, that could be re-used across the wider Anglian Water enterprise.
    This whole life product based approach threw up some interesting challenges, namely how to build a business case that delivered benefit to SPA and also enabled Anglian Water’s future ambitions, taking a lifecycle view of the value delivered.
    To achieve this meant considering and assessing the value to both SPA during the capital delivery phase and Anglian Water during the operational phases. This process also highlighted that certain elements of the digital twin deliver value to both SPA and Anglian Water equally and could therefore be considered as a shared benefit.
    The resulting benefits register helped to identify the value delivered to the alliance partners, which was vital to securing the delivery board sign-off. As Anglian Water is a partner in the alliance, the ability to demonstrate value in the operational phase, with SPA developing the technical foundation, was another key element in securing the investment.
    As part of the overall process, the SPA board were keen to see how the investment would be allocated, so the strategy evolved to set out the capabilities to be developed within SPA to enable the digital twin. This helped to shape and validate the team needed for digital twin delivery.
    With the capabilities and organisational structure resolved, a governance framework was put in place to allow the digital twin's evolution to be managed effectively, with the right checks and balances. This has included input and oversight from the wider Anglian Water team, as they will ultimately be responsible for managing the various digital twins over the long term.
    To validate the digital twin against the SPA outcomes and objectives, its various elements were incorporated into the overall enterprise architecture. This has proved an important part of the process, ensuring alignment with the wider capabilities and, importantly, that the right technology is in place. The enterprise architecture continues to evolve to include information objects below the application layer, again building on the product-based approach, so that it can be used across the wider Anglian Water alliances.
    In total, developing the strategy, business case and capabilities took six months, though it is important to note that this built on the earlier proof of concept and on ideation during the initial mobilisation of SPA. A key next step is to work with Anglian Water to explore accelerated deployment of SPA digital twins on other major schemes, to put the product-based approach to the test and maximise the investment made.
    We have learnt from the early developments on SPA that articulating a whole-life view of value is vital, and that distinguishing the capital and operational stages is equally important, so that the appropriate budget holders can see the value being delivered. We have also learnt the importance of a bold vision matched by a clear definition of the first few steps, with a long-term roadmap for achieving an enterprise digital twin.
    What is certainly clear is that we still have a lot to learn; however, by following architectural best practice and placing people and our environment at the heart of the digital twin, we have put in place a solid foundation from which to build.
    If you would like to know more, please get in touch through the DT Hub.
     
    Read more...
    How manufacturers can structure and share data safely and sustainably. 
    Manufacturers of construction products produce a significant part of the information required to bring about a safer construction industry, but currently, this information isn’t structured or shared in a consistent way.
    If UK construction is to meet the challenges of a digital future and respond to the requirements of a new building safety regulatory system, it needs manufacturers to structure and share their data safely and sustainably.
    There’s no need to wait to digitise your product information. Making the correct changes now will bring immediate benefits to your business and long-term competitive advantage. This guide will help you identify what those changes are.
    Our guide helps decision-makers in manufacturing identify why supplying structured data is important, how to avoid poor investment decisions, safe ways to share information about products across the supply chain, and more.
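    As a purely illustrative sketch of what "structured" means here (the fields are examples only and are not taken from the guide), compare a free-text datasheet line with the same facts expressed as named, typed fields:

    # Unstructured: a free-text line that machines cannot reliably query.
    unstructured = "FireShield board, 12.5mm, Euroclass A2-s1,d0, see PDF for fixings"

    # Structured: the same facts as fields that can be validated, exchanged
    # and compared across a supply chain.
    structured = {
        "product_name": "FireShield board",    # hypothetical product
        "thickness_mm": 12.5,
        "reaction_to_fire_class": "A2-s1,d0",  # EN 13501-1 classification
        "fixing_guidance_ref": "DS-001",       # pointer to a controlled document
    }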
    The Guide: https://www.theiet.org/media/8843/digitisation-for-construction-product-manufacturers-main-guide.pdf
    8-Page Summary: https://www.theiet.org/media/8856/digitisation-for-construction-product-manufacturers-summary.pdf
    2-Page Key Facts and Summary: https://www.theiet.org/media/8856/digitisation-for-construction-product-manufacturers-summary.pdf
    Read more...