TIC 4.0
Publication: Starting a real-time digital twin
A management perspective on building a digital twin and how TIC4.0 helps to achieve your goals.
Management Summary
How can organizations improve transparency, predict throughput, or estimate lead times in complex operations like those at a container terminal? How can the effects of disruptions or the impact of new strategies be effectively simulated? A real-time digital twin offers answers to these questions, applicable across industries. By integrating data from sensors and systems, a digital twin mirrors real-time operations, incorporates decision logic and visualizations, and supports AI-based learning. Insights can then be applied directly to optimize real-world processes.
Purpose and Scope
This document guides management on implementing a digital twin effectively, addressing organizational needs such as replaying historical processes, testing improvements, or enhancing decision-making. Success requires clear objectives, defined KPIs, and standardized data formats to deliver actionable insights. Standardization is especially important, and TIC4.0 plays a critical role in ensuring stakeholders speak the same language throughout the implementation process.
Getting Started
Defining the digital twin's scope, whether it models the entire terminal or specific equipment, is crucial. Start with a simple model and limited use cases, then scale complexity as needed. Multiple systems must be interconnected, and TIC4.0 provides standardized guidelines that simplify integration and ensure seamless communication. Begin by integrating existing sensor data and enriching it with contextual information, such as the GPS locations of container handling equipment (CHE). Following TIC4.0 guidelines helps reduce interface complexity. Early engagement with stakeholders is vital; clearly explain the digital twin concept and define relevant use cases, documenting scope, applicability, and expected benefits. Technical teams can then assess resources and feasibility. Management should prioritize use cases based on complexity and benefits, using tools such as cost-benefit analysis or the Analytical Hierarchy Process (AHP). Combining related use cases can create synergies, guiding the development of a scalable, standardized digital twin.
TIC4.0’s Role
TIC4.0 focuses on standardizing the cargo-handling industry by:
Defining common process semantics and operational definitions, necessary for accurately representing processes within the digital twin.
Offering technical documentation to support seamless data integration, reducing implementation overhead.
Encouraging the adoption of standardized practices across the sector, enabling quicker system integration and lowering implementation effort.
Conclusion
Standardization is key to digital twin success. By leveraging TIC4.0 standards, your organization can develop a scalable, interoperable solution that enhances operational transparency and drives efficiency from concept to execution.
Introduction: From deciding on using TIC4.0 to deriving real-world use cases
In the rapidly evolving field of container terminal operations, the development of digital twins represents a significant step forward in managing complex logistics environments. This article aims to guide the reader through the foundational steps of aligning a digital twin with the unique needs of a container terminal, with a particular focus on leveraging the Terminal Industry Committee's TIC4.0 standards as a guiding framework for standardization and seamless integration.
A digital twin is a dynamic, virtual representation of a physical asset, process, or system. It integrates real-time data, decision logic, and models to mirror and simulate the behavior and state of its physical counterpart. Unlike a simple digital model, a true digital twin involves continuous, automated, two-way data exchange between the physical and digital environments. This ensures that the digital twin accurately reflects current conditions and can actively influence the physical system through real-time actions and recommendations.
The digital twin does more than just monitor—it can predict outcomes, optimize processes, and autonomously take or suggest actions to improve efficiency, reduce downtime, and enhance overall performance. Common applications include improving operational transparency, enabling predictive maintenance, optimizing resource use, and mitigating risks through simulation. For a better understanding of the functions, equipment, and systems that can be included in a digital twin setup for a container terminal, see the visualization to the right.
To ensure that a digital twin meets the operational demands and strategic goals of a container terminal, engaging all stakeholders is crucial. The complex nature of digital twins adds layers of complexity to the information systems and decision-support tools they enhance. However, the maritime industry, particularly in the realm of container terminals, currently lacks detailed models for conducting requirement analyses for digital twins and deriving specific use cases.
This section will outline the general approach for requirement engineering using TIC4.0 as a framework. TIC4.0 provides a standardized language and a set of guidelines that facilitate clear communication and interoperability across systems and stakeholders. By adhering to these standards, organizations can reduce implementation overhead, streamline integration, and ensure that all parties are aligned in their understanding of processes and data flows.
Effective communication is identified as a key element for involving all relevant stakeholders. By establishing trust and leveraging concrete process knowledge from employees, the scope of work can be reduced while enhancing the application of the digital twin. The analysis of use cases from a multi-stakeholder perspective not only helps in making the requirements tangible but also defines the level of technical complexity required for successful implementation as part of the decision-making process.
To accurately capture and assess the requirements for a digital twin, we suggest using a combination of methods tailored to ensure a thorough analysis with robust outcomes. The Utility Analysis, described below, evaluates the potential benefits of each use case against their operational impacts and technical feasibility. This method provides a systematic approach to quantify the benefits, allowing for objective decision-making. Additionally, the Analytical Hierarchy Process (AHP) is incorporated to manage complex decision-making scenarios where multiple criteria must be considered and weighted according to their importance. This helps in prioritizing use cases based on a structured model, making the decision-making process transparent and justifiable.
Understanding and involving the right stakeholders is fundamental to the success of any digital twin initiative. Stakeholder analysis begins with identifying all parties that have an interest in or are affected by the digital twin, including operational staff, IT personnel, management, and external partners. The key to successful stakeholder engagement lies in effective communication strategies tailored to address the concerns and expectations of each group.
To facilitate this, regular workshops and meetings are recommended, where stakeholders can voice their insights and concerns. These sessions also serve as a platform for stakeholders to contribute their unique perspectives on use cases, ensuring a holistic approach to the digital twin’s development. Trust is a crucial component in these interactions, as it encourages stakeholders to share their expert knowledge, which is vital for the detailed mapping of processes and potential improvements.
After identifying all possible stakeholders, grouping them as depicted below in a power-interest matrix can be a great way of focusing on the right ones.
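To make this concrete, here is a minimal sketch, in Python, of how identified stakeholders could be sorted into the four quadrants of a power-interest matrix. The names, scores, and threshold are hypothetical examples, not recommendations from TIC4.0.

```python
# Minimal sketch: sorting stakeholders into power-interest quadrants.
# Names, scores, and the threshold are hypothetical examples, not project data.

stakeholders = {
    "Terminal operations manager":  {"power": 9, "interest": 8},
    "IT / data infrastructure team": {"power": 7, "interest": 9},
    "Maintenance staff":            {"power": 4, "interest": 8},
    "External software vendor":     {"power": 6, "interest": 3},
    "Port authority":               {"power": 8, "interest": 2},
}

def quadrant(power: int, interest: int, threshold: int = 5) -> str:
    """Classify a stakeholder on a 1-10 power/interest scale."""
    if power >= threshold and interest >= threshold:
        return "Manage closely"
    if power >= threshold:
        return "Keep satisfied"
    if interest >= threshold:
        return "Keep informed"
    return "Monitor"

for name, scores in stakeholders.items():
    print(f"{name}: {quadrant(scores['power'], scores['interest'])}")
```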
By integrating the described methods and focusing on comprehensive stakeholder analysis, the groundwork for the digital twin is set, not only to meet current operational needs but also to adapt to future challenges and opportunities within the container terminal environment. This approach ensures that the digital twin becomes a dynamic tool, continually evolving through stakeholder feedback and changing operational requirements.
After identifying the correct stakeholders and establishing the primary focus of the digital twin, the requirement analysis and definition of use cases can be carried out.
Requirement Engineering
As we venture deeper into the creation of a digital twin for container terminals, it becomes imperative to engage in rigorous requirements engineering. This process is crucial in translating high-level operational needs into specific, actionable use cases that the digital twin will address. By focusing on this structured approach, we ensure that the digital twin aligns with both the TIC4.0 standards and the specific demands of the terminal's operations.
The first step in requirements engineering is the identification of suitable use cases where digital twins can significantly enhance operational efficiency and decision-making. For container terminals that plan to adhere to TIC4.0 standards, this involves examining the current processes and pinpointing areas where digital solutions can provide substantial improvements. The potential functions of a digital twin include:
Monitoring: Continuous monitoring of equipment and operations helps maintain high standards of operational efficiency and safety.
Reporting: Automated and enhanced reporting capabilities allow for more accurate and timely information dissemination, aiding decision-makers at all levels.
Data Analytics: By leveraging data collected from various sources across the terminal, digital twins can offer insights into operations, identifying bottlenecks and opportunities for process optimization.
Simulation: Complex simulations can test responses to hypothetical situations without risking actual resources, providing a valuable tool for strategic planning and training.
Optimization: Digital twins can simulate different operational scenarios to find the most efficient approaches, reducing costs and improving service quality.
Predictions: Utilizing historical data, digital twins can forecast future conditions and outcomes, enabling proactive management of resources and better handling of potential disruptions.
Each of these functions serves as a foundation for use case ideation. By examining existing challenges within the terminal's operations and considering how these digital twin functions can address them, stakeholders can develop a robust list of potential use cases. This ideation process should involve a diverse group of stakeholders, ensuring that the use cases cover a wide range of needs and opportunities within the terminal environment. But how can the use case be documented?
Effective documentation is crucial in translating the ideated use cases into actionable tasks for digital twin implementation. Each use case should be documented with a comprehensive description that captures not only the existing information but also potential preconditions and postconditions. This ideally involves utilizing a standardized template that outlines the current ('As-is') and desired ('To-be') processes through user stories. These stories help to articulate the frequency of the problem or benefit, the general desired state, and the specifics of the use case.
User stories are a simple, effective tool commonly used in agile software development to capture a specific user requirement or feature from the perspective of the end-user. They are designed to ensure that the development team understands who the users are, what they need, and why they need it, in a concise and actionable format. A user story typically follows a simple template:
As a [type of user], I want [some goal] so that [some reason].
This format helps keep the focus on the user's needs rather than technical specifications. User stories are intended to be short, clear, and limited to one specific feature or functionality to keep them manageable and understandable. For example: "As a maintenance planner, I want to be alerted when a straddle carrier's tire pressure falls below a minimum acceptable value so that air can be replenished between shifts." A complete use case may combine a few user stories, but limiting their number ensures that the use case does not become too large.
The documentation template should also include fields for the necessary incoming and outgoing data connections, with detailed descriptions of these connections, as well as information on the units involved, current usage, existing interfaces, and any relevant data. This information will be very helpful when implementing TIC4.0 at later stages. Additionally, any legal requirements and specifications related to the availability and criticality of the use case should be recorded. Stakeholders are encouraged to fill out the template independently, providing as much detail as possible to ensure clarity and completeness. The template should include comprehensive explanations and contact information for addressing questions that may arise during this initial documentation phase. Analysts in charge of requirements engineering may need to follow up with stakeholders for clarifications, to rephrase certain sections for better understanding, or to identify additional information needs. These needs are often addressed through subsequent stakeholder interviews or additional research.
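As a sketch of what such a template could look like in structured form, the following hypothetical Python dataclasses mirror the fields described above. The field names are illustrative assumptions, not a prescribed TIC4.0 format, and should be adapted to your own template.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure for a use case documentation template.
# Field names are illustrative; adapt them to your own template.

@dataclass
class DataConnection:
    source_or_target: str          # e.g. "straddle carrier onboard unit"
    direction: str                 # "incoming" or "outgoing"
    existing_interface: str        # e.g. "CAN bus gateway", "REST API"
    description: str = ""

@dataclass
class UseCase:
    title: str
    as_is: str                     # current process, captured as a user story
    to_be: str                     # desired process, captured as a user story
    user_stories: List[str] = field(default_factory=list)
    preconditions: List[str] = field(default_factory=list)
    postconditions: List[str] = field(default_factory=list)
    data_connections: List[DataConnection] = field(default_factory=list)
    units_involved: List[str] = field(default_factory=list)
    legal_requirements: List[str] = field(default_factory=list)
    availability_criticality: str = ""
    contact: str = ""              # who to ask for clarifications
```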
Once the use cases are documented, the next step involves filtering them to ensure alignment with the project's scope and feasibility. Use cases are evaluated and filtered out based on several criteria:
Scope Alignment: Discarding use cases that do not fit within the project's defined scope.
Location Constraints: Excluding use cases that cannot be implemented at the project site.
Redundancy: Removing use cases that have already been implemented at the main project site or are already included in the documented use cases (duplicates).
Economic Viability: Filtering out use cases that are not economically feasible at the current time.
Relevance: Dismissing items that are not actual use cases but rather part of the requirements, or do not contribute directly to the project objectives.
This filtering process ensures that the focus remains on viable, impactful use cases that align with both the strategic goals of the digital twin initiative and the operational realities of the container terminal. This methodical approach aids in prioritizing efforts and resources towards the most beneficial and practical applications of the digital twin technology.
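A minimal sketch of this filtering step, assuming each documented use case has been annotated with simple yes/no flags for the criteria above (the flag names and example use cases are hypothetical):

```python
# Minimal filtering sketch; the flags are hypothetical and would be set
# during review of each documented use case.

use_cases = [
    {"title": "Tire pressure monitoring", "in_scope": True, "feasible_on_site": True,
     "already_implemented": False, "economically_viable": True, "is_use_case": True},
    {"title": "Automated gate lighting", "in_scope": False, "feasible_on_site": True,
     "already_implemented": False, "economically_viable": True, "is_use_case": True},
]

def passes_filter(uc: dict) -> bool:
    return (uc["in_scope"]                    # scope alignment
            and uc["feasible_on_site"]        # location constraints
            and not uc["already_implemented"] # redundancy / duplicates
            and uc["economically_viable"]     # economic viability
            and uc["is_use_case"])            # relevance

remaining = [uc["title"] for uc in use_cases if passes_filter(uc)]
print(remaining)  # -> ['Tire pressure monitoring']
```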
Now the use cases are documented and should be understood by everyone reading them, and the additional information needed for realization is included. Unfortunately, budget or time constraints will most likely limit which use cases can be realized, and even if they do not, it makes sense to start with the most beneficial ones. This is described in the next section.
Structured decision making
Following the filtering process, the next step involves a structured decision-making process to evaluate and prioritize use cases. This evaluation not only considers the level of detail required by each use case but also aims to prioritize them based on their potential impact and alignment with strategic goals, rather than simply selecting a few for implementation.
The evaluation begins by defining specific criteria that guide the assessment and prioritization of each use case. Criteria are derived from the strategic objectives of the project stakeholders and tailored to the goals of the research project. The criteria used in the evaluation process include:
Process improvements: Savings in process costs due to saved time, monetary savings, or improvements in information flow.
Work safety: Enhancements to the safety of operational procedures.
Process level: Whether the use case involves core operational processes or supporting processes.
Ecological sustainability: The environmental impact of the use case.
Estimated implementation duration and costs: Feasibility and budget considerations.
Scalability: Applicability of the solution across different equipment, systems or terminals.
Expandability: Potential to lay the groundwork for additional processes or functionalities.
Strategic relevance and longevity: Importance of the use case in the face of future operational changes.
Most likely, a detailed analysis of each point would be too time consuming. These criteria should therefore be assessed through short descriptions based on realistic expectations rather than exhaustive analyses.
But how can these criteria be structured, and can they be evaluated without first carrying out a comprehensive analysis for each individual use case? Furthermore, how can several assessments of the expected process improvements be compared across use cases in order to decide which use case should be implemented?
The Utility Analysis method is a valuable tool for exactly this comparison and decision-making process. It is often chosen for its simplicity and its ability to include a broad range of evaluators as well as both qualitative and quantitative criteria. The method is described below; beforehand, stakeholders should ensure that the significance of each use case in relation to these criteria is well documented using the use case template developed in the earlier phases.
Selection and Evaluation of Criteria
Stakeholders from diverse groups, including operations, IT, maintenance, and on-site operational staff, are often involved in evaluating the use cases. The evaluators can also be identified via the power-interest matrix, but should come from different disciplines or departments. Evaluators are assigned randomly to prevent bias in the assessment and ideally use a five-point scale to rate each use case (for example, a Likert scale: https://en.wikipedia.org/wiki/Likert_scale). Additional support in the form of guidelines, descriptive texts, and examples should be provided to maintain a consistent understanding among evaluators.
After all evaluations are complete, the weighting of the criteria is discussed and finalized with all project participants. The Analytical Hierarchy Process (AHP) can support this step by deriving a ranking between criteria, resulting in percentage values that indicate the importance of each criterion. These percentages are then applied to normalize the scores and finalize the prioritization.
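For the AHP step, the following minimal sketch shows how percentage weights can be derived from a pairwise comparison of criteria using the common geometric-mean approximation. The comparison values are hypothetical and chosen so that the resulting weights match those used in the example table further below.

```python
import math

# Hypothetical pairwise comparison matrix (Saaty scale) for three criteria.
# matrix[i][j] = how much more important criterion i is than criterion j.
criteria = ["Implementation costs", "Scalability", "Process improvements"]
matrix = [
    [1,   1/2, 1],
    [2,   1,   2],
    [1,   1/2, 1],
]

# Geometric mean of each row, then normalize to obtain percentage weights.
geo_means = [math.prod(row) ** (1 / len(row)) for row in matrix]
total = sum(geo_means)
weights = [gm / total for gm in geo_means]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.0%}")
# -> 25%, 50%, 25%, matching the weights used in the example table below.
```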
The mean value for each criterion is calculated based on stakeholder ratings, and this value is multiplied by the defined percentage weight to derive the final evaluation score. This score establishes the preliminary prioritization of use cases, guiding the subsequent phases of the digital twin development. This rigorous evaluation ensures that the selected use cases are not only technically and economically feasible but also align closely with the strategic and operational priorities of the container terminal.
Let's look at the example use case of monitoring the tire pressure of CHE to see how this can be done in practice:
Criterion | Value | Ratings (5 = very good; 1 = very bad) | Weight (in %) |
---|---|---|---|
Implementation costs | Tire pressure data is already recorded and stored by the onboard unit. Implementation costs are limited and mainly cover the integration and reformatting of the data, as well as the analysis needed to develop a system that alerts technical staff to replenish air between shifts. | User 1: 4; User 2: 5 | 25 |
Scalability | Tire pressure monitoring can be implemented for all vehicles newer than 2012 (about 60%) and is also applicable to other container terminal locations. | User 1: 3; User 2: 3 | 50 |
Process improvements | Breakdowns during operations become less likely (they currently account for 5% of all breakdowns). Safety increases and costs decrease thanks to a minimum acceptable tire pressure warning to operations and maintenance. | User 1: 4; User 2: 2 | 25 |
… | … | … | … |
Evaluation | Calculation = ((4+5)/2)*0.25 + ((3+3)/2)*0.5 + ((4+2)/2)*0.25 → result in next column → | 3.375 | (score out of 5) |
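The final score in the table can be reproduced with a small calculation like the following, using the ratings and weights from the example above:

```python
# Reproducing the example evaluation: mean rating per criterion,
# weighted by the AHP-derived percentages.

ratings = {
    "Implementation costs": {"ratings": [4, 5], "weight": 0.25},
    "Scalability":          {"ratings": [3, 3], "weight": 0.50},
    "Process improvements": {"ratings": [4, 2], "weight": 0.25},
}

score = sum(
    (sum(c["ratings"]) / len(c["ratings"])) * c["weight"]
    for c in ratings.values()
)
print(round(score, 3))  # -> 3.375 (out of 5)
```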
After all use cases have been evaluated, we suggest choosing a limited number of the best-rated use cases (this ensures that the digital twin's scope is not too large to begin with) and evaluating them in more depth. The following questions can help with this:
Which technical solutions need to be implemented for each use case? Each new technical component should be listed, and a table can map each use case to its technical components. Examples would be a function for ingesting straddle carrier data or a weather data API that provides additional information. Multiple use cases might require weather data, or need straddle carrier data to be sent to a data lake or another form of centralized data storage. Such a mapping quickly reveals synergies (see the sketch after this list), and the list of technical functions also shows where TIC4.0 should be included.
What use cases are absolutely necessary because of stakeholder or project requirements?
Which use cases need to be done in sequence? Some use cases might depend on each other. A waiting-time analysis, for example, might only be possible once the operational data is fully integrated and enriched with vessel information. Such use cases should be implemented in sequence, which also supports structuring the project timeline at a later stage.
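To answer the first question, a simple mapping from use cases to the technical components they require makes shared building blocks visible. The component and use case names below are hypothetical examples:

```python
from collections import Counter

# Hypothetical mapping of use cases to the technical components they need.
required_components = {
    "Waiting-time analysis":      {"straddle carrier data ingest", "vessel data enrichment", "data lake"},
    "Weather-aware planning":     {"weather data API", "data lake"},
    "Breakdown forecast for CHE": {"straddle carrier data ingest", "data lake", "ML pipeline"},
}

# Components needed by more than one use case are synergy candidates.
usage = Counter(c for comps in required_components.values() for c in comps)
synergies = [c for c, n in usage.items() if n > 1]
print(sorted(synergies))  # e.g. ['data lake', 'straddle carrier data ingest']
```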
Based on these questions and the previous evaluation the best X use cases can be selected by management and the digital twin build can be started.
Outlook: What now?
With the rigorous evaluation and prioritization of use cases complete, the path forward involves a focused approach to implementing the selected use cases for the digital twin. This strategic direction not only enhances operational efficiency but also aligns with the overarching goals of the terminal. The implementation process begins with detailed planning and integration of technical solutions for each selected use case. Coordination among stakeholders is essential to ensure that all technical components, such as data ingestion functions and APIs, are seamlessly integrated and adhere to the TIC4.0 standards. These standards provide the standardized guidelines needed to reduce complexity and ensure consistent data communication across different systems.
Creating a comprehensive integration plan that outlines technical requirements and identifies synergies between use cases is crucial. This plan will help optimize resources and streamline implementation. TIC4.0’s structured approach ensures that all stakeholders operate with a common language, facilitating smooth integration and quicker deployment.
Setting clear milestones and timelines for each use case is also essential for tracking progress. Regular review meetings should be scheduled to monitor advancement, adjust plans if needed, and maintain alignment with project objectives. Continued stakeholder communication and involvement are key to addressing concerns and keeping all parties informed and engaged throughout the project lifecycle. The application of TIC4.0 standards plays a vital role in this communication, ensuring that all stakeholders have a unified understanding of the data and processes involved.
Our Top 5 Use Cases
Based on the detailed evaluation and the strategic importance of the use cases, the following five were selected in our example, the EUROGATE TwinSim project, to potentially spearhead the digital twin initiative:
CHE Positional Data Integration: Enhancing real-time operational visibility and decision-making capabilities by integrating straddle carrier telemetry data into the digital twin, making the overall current state visible and allowing for in-depth analysis.
Weather Data Utilization: Incorporating real-time weather data through an API to improve operational planning and reduce weather-related disruptions.
Operational Data Enrichment with Vessel Information: This use case enables a comprehensive analysis of vessel arrival times and loading/unloading efficiency, crucial for optimizing port operations.
Waiting-Time Analysis: Analyzing and reducing waiting times by leveraging enriched operational data, which can significantly enhance throughput and customer satisfaction.
Breakdown Forecast Based on IoT Data for CHE: Implementing predictive maintenance strategies by analyzing IoT data collected from CHE.
These use cases represent a balanced mix of quick wins and strategic projects that will provide significant benefits in both the short and long term. Their implementation will not only demonstrate the value of the digital twin but also set a solid foundation for future expansions and many other use cases. The documentation provided with TIC4.0 served us both to understand the data to be expected (e.g., for use cases regarding CHE) and to integrate it into further processing, such as machine learning or visualizations. For example, the data science department already knows which values will be sent as soon as the data infrastructure department provides the data. Generally, most if not all use cases and departments benefit from the implementation of data standards and the easy integration of data sources, as is the case when using TIC4.0.
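As an illustration of this effect, a typed message definition acts as a data contract between departments: downstream teams can build against known fields before live data flows. The following sketch is a hypothetical example only and is not the actual TIC4.0 schema; the real semantic definitions should be taken from the TIC4.0 documentation.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical CHE telemetry message; illustrative only, not the TIC4.0 schema.
# Consult the TIC4.0 documentation for the actual semantic definitions.

@dataclass
class CheTelemetry:
    che_id: str                # equipment identifier
    timestamp: datetime        # time of measurement (UTC)
    latitude: float            # GPS position
    longitude: float
    tire_pressure_bar: float   # example sensor value
    operating_mode: str        # e.g. "laden", "idle"

def to_kpi_input(msg: CheTelemetry) -> dict:
    """Example downstream consumer: data science can code against the
    agreed fields before the data infrastructure delivers live data."""
    return {
        "equipment": msg.che_id,
        "position": (msg.latitude, msg.longitude),
        "tire_pressure_ok": msg.tire_pressure_bar >= 8.5,  # hypothetical threshold
    }
```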
Final Steps
As the top use cases take shape, it is vital to monitor their impact and learn from initial deployments to refine and adjust the approach for subsequent phases. Continuous improvement based on real-world feedback and performance metrics will ensure the digital twin evolves to meet the terminal’s operational needs and technological advancements. This iterative approach allows for scalability, potentially adding more complex and integrative use cases in the future, thereby maximizing return on investment and enhancing overall terminal efficiency.
The consistent application and integration of TIC4.0 standards throughout these developments are crucial. These standards ensure that the digital twin aligns with industry best practices and facilitates interoperability across systems and stakeholders. By adhering to TIC4.0, the digital twin initiative not only enhances its efficiency and effectiveness but also contributes to the broader industry goal of standardizing operations and data usage within container terminals and beyond. This strategic alignment with TIC4.0 ensures that the digital twin remains future-proof, scalable, and ready for further expansion while delivering long-term value across the terminal’s operations.