By Darius Okle, Interlate Senior Metallurgist.
We are always looking at modelling. In particular, the type of modelling done by plant mets on a quiet Thursday afternoon, the type of modelling I used to do in Microsoft Excel with the Analysis ToolPak: looking for proof of the relationships I knew existed, and for validation that I understood the plant better than everyone else.
To get technical, it’s empirical modelling. Using R and Python for analysis, Tableau for visualisation and a team of data scientists, we have embraced empirical modelling as an offering. From a personal perspective, it’s something that SMEs (subject matter experts) and data scientists are equally passionate about. From an industry perspective, there is clearly untapped value to be delivered.
This brings us to the three modelling applications we currently offer that bring the most value to mining operations:
Softsensors for accurate sensor measurements
Softsensors, sensors that produce inferences or measurements in the plant from other available sensors, vary in their accuracy and practicality. The best are usually built on an underlying engineering or physical phenomenon: for example, a torque sensor on a rotating drum can give an indication of the density of the feed, and therefore the feed grade. A softsensor’s accuracy depends on how many of the other variables affecting it are also being measured. Its usefulness depends on how accurate and reliable the measurement is and what types of decisions it allows us to make.
The calibration of the instrument or softsensor is what makes it accurate. This predictability and assurance of accuracy is the difference between a “sensor” and a “softsensor”. With this in mind, the development of softsensors is something the team takes great pride in completing, with input from SMEs, data scientists and our grumpy statistician, before delivering to our clients.
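As a minimal sketch of the calibration idea behind the torque example above (synthetic data and made-up coefficients, not a real plant model), a softsensor can be a regression that infers feed density from the torque and mill speed measurements we do have, calibrated against lab assays:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic historian data: torque responds to feed density plus a
# confounding variable (mill speed). All coefficients are illustrative.
n = 500
density = rng.uniform(1.4, 1.9, n)         # t/m^3, the quantity to infer
mill_speed = rng.uniform(9.5, 10.5, n)     # rpm, a measured confounder
torque = 30.0 * density + 4.0 * mill_speed + rng.normal(0.0, 1.5, n)

# Calibrate the softsensor: regress lab-assayed density on the
# available measurements (torque and speed).
X = np.column_stack([torque, mill_speed, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, density, rcond=None)

def soft_density(torque_now, speed_now):
    """Inferred feed density from live torque and speed readings."""
    return coef[0] * torque_now + coef[1] * speed_now + coef[2]

# The residual spread tells us how much to trust the inference,
# which is what separates a calibrated softsensor from a guess.
resid = density - X @ coef
print(f"calibration RMSE: {resid.std():.3f} t/m^3")
```

The key design point is the residual check at the end: the softsensor is only as useful as the documented accuracy of its calibration.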
Benchmarking and baselining for daily performance
In benchmarking or baselining, we are trying to do more than forecast. A forecast starts with the mine’s estimate of tonnes and grades, followed by an estimate of throughput and a loaded aggregate for equipment availability. A benchmark most commonly looks at the previous day; more advanced applications can take the measurements as they come in and give a live benchmark.
The goal, though, is the same for both: to separate the impact of the variables you have no control over from the impact of the variables you can influence. That gives targeted, data-driven focus to the areas of the plant with the largest opportunity for improvement. It is as much a tool for the plant metallurgist to figure out what to focus on as it is a tool for communication and managing up.
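The separation described above can be sketched in a few lines (synthetic data and hypothetical drivers, not a client model): fit the uncontrollable variables alone, and treat the gap between actual and expected performance as the controllable slice worth chasing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily data: throughput driven by variables the plant cannot
# control (feed hardness and grade) plus the part operations can influence.
days = 365
hardness = rng.uniform(10, 16, days)       # e.g. kWh/t, illustrative
grade = rng.uniform(0.8, 1.4, days)        # % metal in feed, illustrative
operator_effect = rng.normal(0, 15, days)  # the controllable slice
throughput = 900 - 25 * hardness + 40 * grade + operator_effect

# Benchmark model: expected throughput given only the uncontrollables.
X = np.column_stack([hardness, grade, np.ones(days)])
coef, *_ = np.linalg.lstsq(X, throughput, rcond=None)
expected = X @ coef

# The residual is the benchmark gap: performance relative to what the
# feed allowed, which is the number worth reporting up.
gap = throughput - expected
print(f"yesterday's gap vs benchmark: {gap[-1]:+.1f} t/h")
```

Run daily on the previous day’s data this gives the static benchmark; scoring each measurement as it arrives against the same fitted model gives the live version.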
Optimisation for answering driving questions
Optimisation can be difficult to discuss without first addressing the logical conclusion to driving questions. Driving questions are what we ask our clients to pose, in a way that helps drive our descriptive analytics and guides the work towards information they can walk away with and apply to the plant straight after our closeout presentations. These questions are usually of the form: “What throughput gives the best performance?”, “What grade gives me the highest recovery?”, “What combination of float parameters should we use for treating ore type West54?” They are great questions for framing the KPI, but they are complicated, and extremely difficult to answer with techniques or applications that don’t include modelling. The solution is therefore to layer the models upon each other, taking care of all the convolution, combinations and permutations, and then run simulations to seek an effective solution.
For optimisation, the empirical model is built to address the question posed. For example, let’s examine a polymetallic float, lead/zinc/silver, producing in series a lead concentrate and then a zinc concentrate, both with silver payable units. The question would be: “What should my grade targets be for my products to produce the highest return?” This is complicated by the different relationships that exist in the plant for different feed types. As operators, we know this is because of geometallurgy and the way these relationships change with throughput, P80, circuit configuration and blend.
So the model is constructed to take all of that into account, followed by a mass balance feeding into the payable terms and shipping parameters. Then we constrain the output to engineering and operational limits. Finally, after its construction, we are ready for optimisation. We used to interrogate our results with heatmaps and iterations; now, with effective work from our data scientists, we can do it with maths and machine learning. This approach can be bundled up into an application so you can ask a few different types of question and, if you’re ready for it, incorporate emissions data and start discovering the real hidden costs of your operation.
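A stripped-down sketch of that layering, for the lead side of the example only (every coefficient, price and term below is an illustrative placeholder; in practice each layer is the fitted plant model and the actual smelter contract): an empirical grade-recovery layer feeds a payable-terms layer, the grade is constrained to an operational window, and a numerical optimiser finds the grade target that maximises return:

```python
from scipy.optimize import minimize_scalar

def pb_recovery(con_grade):
    """Empirical grade-recovery layer: pushing concentrate grade (%) costs
    recovery. Illustrative curve, not a fitted model."""
    return 0.95 - 0.0004 * (con_grade - 50.0) ** 2

def payable_fraction(con_grade):
    """Smelter-terms layer: pay only content above a 3-unit deduction.
    Hypothetical contract terms."""
    return (con_grade - 3.0) / con_grade

FEED_TPH, FEED_PB = 250.0, 0.055   # t/h and Pb fraction in feed (assumed)
PRICE = 2000.0                     # $/t of payable Pb (assumed)

def hourly_return(con_grade):
    """Layered model: recovery -> mass of Pb -> payable terms -> dollars."""
    pb_recovered = FEED_TPH * FEED_PB * pb_recovery(con_grade)
    return pb_recovered * payable_fraction(con_grade) * PRICE

# Constrain the search to the smelter's acceptable grade window, then
# maximise return (minimise its negative) instead of scanning a heatmap.
res = minimize_scalar(lambda g: -hourly_return(g),
                      bounds=(45.0, 70.0), method="bounded")
print(f"optimal Pb concentrate grade target: {res.x:.1f}% Pb")
```

The trade-off the optimiser resolves is the one described in the text: higher grade loses recovered metal but improves the payable fraction, so the best target sits somewhere inside the constrained window rather than at either extreme.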
Hopefully, that’s given some good insight into how models can be used to help operations make decisions and improve plant performance. This is proven stuff: it can work well, it makes sense, and often the payback couldn’t be faster. Our long-term plan is to work towards a combination of optimisation and benchmarking that provides a dynamic benchmark, or KPI target, for plant operation. This would combine the benchmarking and optimisation techniques above with the kind of matchmaking systems seen in competitive sports and video games, which estimate expected performance and adjust a ranking, or in this case the target KPIs for the shift.
If you are interested in working with us to develop and mature our dynamic benchmarking or want to have a closer look at some of our mature products, please get in touch.