In the Artificial Intelligence Lab, we use the latest machine learning technologies to create sophisticated models from your data. Our particular strength lies in modelling physical processes by combining machine learning with physical modelling. To this end, we work with our SimLab for simulation-based feature generation, which allows us to introduce features that are difficult or expensive to measure, or that cannot be measured at all. We support you with our bundled expertise throughout the entire life cycle, from the initial proof of concept through the iterative optimisation process to the user-friendly deployment of your model.
Do you have an inherently complex process or system that you want to take to the next level of efficiency, or simply understand more deeply? Do you have historical data about the process and wonder how it can be used to achieve your goal? Ask us! We are engineers and know how to get the most out of your data. Let us make it happen for you.
Data Science Portfolio
Simulation-based surrogate models using the response surface methodology, typically used for optimisation or for parameter sensitivity analysis.
Classification and regression models built from experimental, simulation, or mixed data, for use as forward models in control algorithms where physical modelling is too complex or its turnaround time too long.
ML toolbox providing the most common algorithms, such as SVMs, k-means, gradient boosting, and random forests.
Toolbox for advanced data manipulation, automatic hyperparameter optimisation, and model selection.
Optimisation toolbox for machine learning pipelines using genetic programming.
Advanced data manipulation and optimisation toolbox with a variety of human-readable result representations.
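To illustrate the first portfolio item, here is a minimal sketch of a response-surface surrogate model. The simulation function, sample counts, and domain are hypothetical stand-ins, not taken from an actual project: a quadratic surface is fitted to noisy "simulation" samples by least squares, and the cheap fitted surface then replaces further simulation runs during optimisation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(x1, x2):
    # hypothetical stand-in for an expensive simulation run
    return (x1 - 1.0) ** 2 + (x2 + 0.5) ** 2 + rng.normal(0.0, 0.01, x1.shape)

# design of experiments: random samples in the input domain
x1 = rng.uniform(-2, 2, 50)
x2 = rng.uniform(-2, 2, 50)
y = simulate(x1, x2)

# quadratic response surface:
# y ~ c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# the fitted surface is cheap to evaluate, so a dense grid search
# (or any standard optimiser) can locate the optimum without
# further simulation runs
g1, g2 = np.meshgrid(np.linspace(-2, 2, 201), np.linspace(-2, 2, 201))
G = np.column_stack([np.ones(g1.size), g1.ravel(), g2.ravel(),
                     g1.ravel() ** 2, g2.ravel() ** 2, (g1 * g2).ravel()])
best = np.argmin(G @ coef)
print(g1.ravel()[best], g2.ravel()[best])  # near the true optimum (1.0, -0.5)
```

In practice the design of experiments (e.g. Latin hypercube sampling) and the surface order are chosen to balance simulation cost against model accuracy.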
FAQ
Digital twins or response surfaces can be derived from both experimental and computational data. In cases where experimental data cannot be provided due to a lack of accessibility or of measurement technology, we simulate the system computationally and generate the missing data.
Yes, this belongs to maintenance and is an integral part of life-cycle management.
The frequency depends on the modelled system and its input data. For technical systems that operate under controlled conditions and have a very low wear rate, training intervals can become very long. For highly dynamic systems with a high level of stochasticity, training intervals are much shorter.
Machine learning models can be monitored during deployment to track the accuracy of their predictions. When the characteristics of the input data change significantly over time, this shows up as a drop in accuracy. That is when the model needs to be retrained to account for the changed data characteristics.
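The monitoring idea described above can be sketched as a rolling accuracy check that flags a retrain when accuracy drops below a threshold. The window size, threshold, and simulated prediction stream below are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)

def retrain_trigger(y_true, y_pred, window=50, threshold=0.8):
    """Return the first index at which rolling accuracy falls below threshold,
    or None if it never does."""
    correct = (np.asarray(y_true) == np.asarray(y_pred)).astype(float)
    for i in range(window, len(correct) + 1):
        if correct[i - window:i].mean() < threshold:
            return i  # signal: input data characteristics have drifted
    return None

# simulated prediction stream: the model is ~95% accurate for 200 samples,
# then the data drifts and accuracy falls to ~50%
y_true = np.zeros(400, dtype=int)
y_pred = np.concatenate([
    (rng.random(200) > 0.95).astype(int),  # mostly correct before drift
    (rng.random(200) > 0.50).astype(int),  # coin flip after drift
])

trigger = retrain_trigger(y_true, y_pred)
print(trigger)  # fires shortly after the drift at sample 200
```

Production monitoring would typically also track input-distribution statistics directly, since ground-truth labels for computing accuracy often arrive with a delay.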
We follow a white-box approach and provide the full code and data manipulation strategies, including all settings and hyperparameters, to ensure replicability.