This is the first part in a multi-part series outlining some of the limitations we run into when trying to fit machine learning algorithms to traditional rock mass classification systems. As practitioners, we need to evaluate the rock mass, identify possible failure modes, and work to reduce uncertainty in our geotechnical model. Whether that means adding resolution to the structural model, assessing the impacts of alteration on rock strength, or better characterizing and modelling our weathering horizons, we need to identify and capture the details that matter. If our goal is to add resolution and confidence to the inputs to the geotechnical domain model, why are we restricting ourselves with interval-based classification systems (Figure 1)?
The onset of big data and automation in this field is going to challenge us to rethink rock mass classification and how we can leverage automated logging to generate high-resolution, high-quality, and highly reliable data to inform our rock mass characterization. Our goal should be to maximize the value of data that have already been collected. This article draws on SRK’s work on core photo image recognition for practical examples of using machine learning for geotechnical characterization.
Rock mass characterization at the greenfield stage involves detailed geotechnical logging of diamond drill core, generally using the Rock Mass Rating (RMR) system (Bieniawski 1976, 1989; MRMR from Laubscher 1990 and Laubscher & Jakubec 2000) or the Norwegian Geotechnical Institute’s Q-system (Barton et al., 1974).
These systems are invariably confined to interval-based logging, generally based on core barrel lengths of 1.5 or 3.0 m. This is partly because we are limited by what is practically achievable (nobody has time to log core in 5 cm intervals), and partly because key input parameters such as fracture frequency and the Rock Quality Designation (RQD) (Deere, 1964) are themselves defined over intervals.
“Whether in the early site investigation phases or in a later design phase, a low RQD value should be considered a “red flag” for further action.”
– Deere & Deere, 1988
RQD is not a metric that fully describes the condition of the core at a particular location, as it provides no context for the degree of damage present. An interval of 100% RQD implies solid core and reasonable core conditions, and anything less than 100% indicates some degree of fracturing. An interval of 0% RQD, however, can represent a range of conditions: fault gouge and brecciation; a fracture spacing of 9.9 cm; or mechanically opened defects, foliation, or bedding that were incorrectly included in the count. Things are further complicated if interval lengths are very large (> 3 m) or span a range of conditions. Figure 2 shows a range of conditions in the rock mass, from drill breaks/mechanical damage (blue), to broken core but no gouge (yellow), to a true fault zone with gouge (red).
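The sensitivity of RQD to its fixed 10 cm threshold is easy to demonstrate. The snippet below is a minimal sketch of the standard RQD calculation (total length of sound core pieces of 10 cm or longer, as a percentage of the run length); the run length and piece lengths are invented for illustration.

```python
def rqd(piece_lengths_cm, run_length_cm, threshold_cm=10.0):
    """RQD: total length of sound core pieces >= 10 cm, as a percentage of the run length."""
    sound = sum(length for length in piece_lengths_cm if length >= threshold_cm)
    return 100.0 * sound / run_length_cm

run = 150.0  # a 1.5 m core run, in cm

# Core fractured uniformly at 9.9 cm spacing: every piece falls just under the
# threshold, so the interval scores 0% -- indistinguishable from fault gouge or rubble.
print(rqd([9.9] * 15 + [1.5], run))       # 0.0

# The same run fractured at 10.1 cm spacing scores roughly 94%.
print(rqd([10.1] * 14 + [8.6], run))      # ~94.3
```

Two runs of essentially the same rock mass, fractured at 9.9 cm versus 10.1 cm spacing, score 0% and roughly 94% respectively, which is why a low RQD value on its own says little about whether gouge, rubble, or simply closely spaced jointing is present.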
Automatically logging RQD hasn’t reduced the effort required to review these intervals, as the core photos still need to be reviewed to determine whether the intervals are geotechnically significant or not. If an interval is significant, there is then a need to create a separate table or field to track it. New technologies and integrations make this core photo review more efficient; however, it is challenging or impossible to visualize spatial relationships by directly comparing images between multiple holes and boxes because of scale issues. If a piece of core is nominally 40-70 mm in diameter, viewing 10 m of core imagery at once puts you at a scale of roughly 1:200. Maintaining spatial accuracy across multiple drillholes means you will not see the variability between holes on section, or between sections.
Fracture frequency is no better at conveying core conditions in an automated characterization workflow. There is still a need to define an interval or domain over which to calculate the frequency. Fracture frequency increases towards faults and damage zones, but what arbitrary and subjective rules do we use to decide on minimum lengths to include in a domain? Even if we manage to settle on minimum interval lengths, how do we determine the fracture frequency of rubble or gouge? Do we cap fracture frequency at 20 joints/m (RMR) or 40 joints/m (MRMR)? Does this tell us whether rubble or gouge is present in the interval?
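As a quick illustration of that last point, here is a minimal sketch (not tied to any particular logging system) of how a capped fracture frequency erases exactly the distinction we care about. The joint count and interval length are invented; the caps follow the 20 joints/m (RMR) and 40 joints/m (MRMR) maxima mentioned above.

```python
def fracture_frequency(joint_count, interval_length_m, cap_per_m):
    """Joints per metre over a logged interval, capped at the rating system's maximum."""
    return min(joint_count / interval_length_m, cap_per_m)

# A hypothetical 0.5 m zone of rubble: whatever joint count we assign, the value
# saturates the cap, and the distinction between closely jointed rock, rubble,
# and gouge is lost.
print(fracture_frequency(60, 0.5, cap_per_m=20))   # 20 (RMR cap)
print(fracture_frequency(60, 0.5, cap_per_m=40))   # 40 (MRMR cap)
```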
Most importantly, how do we know the algorithm has correctly identified a fault zone and not drill spin or mechanical damage? Are we increasing the uncertainty in our results by over-training our models? What are the consequences if the algorithm misclassifies an interval? Can you rely on these outputs for design, or do you still need to go back to the core photos to make sure you aren't missing something important?
Instead of attempting to train our models on RQD, knowing we can’t even log it objectively (see “Rock quality designation (RQD): time to rest in peace” by Pells et al., 2017, among others), we should be rethinking our classification systems to take advantage of the increased resolution we can achieve with machine learning. The Core Damage Index (CDI) (Tims et al., 2022) lets you capture high-resolution data without subjective interpretation, mainly by not restricting observations to drilled run lengths (Figure 3).
By leveraging machine learning-based image recognition, we can collect data down to whatever minimum resolution we want (in this case, intervals of 5 cm or greater). Instead of worrying that significant intervals are being blurred or lost within larger logged intervals, we can focus on how broken the core is without having to review thousands of metres of box photos. The index is intended as a proxy for manual review of the core photos. You can visualize it in your 3D modelling environment and link it with your structural, lithological, or alteration models to inform your geotechnical domain model. Improving your structural model and understanding the variability within fault zones by using this index is more valuable than having an additional 100,000 m of RQD that needs to be manually reviewed.
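The sketch below is purely illustrative and is not the published CDI workflow; it only shows the general pattern of rolling fine-resolution (here, 5 cm) image-classification outputs up into an index over arbitrary intervals. The class names, weights, and depths are all invented for the example.

```python
from statistics import mean

# Per-slice output from an image-recognition model:
# (depth_from_m, depth_to_m, predicted_class). Values are invented.
slices = [
    (100.00, 100.05, "intact"),
    (100.05, 100.10, "intact"),
    (100.10, 100.15, "broken"),
    (100.15, 100.20, "rubble"),
    (100.20, 100.25, "gouge"),
]

# Hypothetical ordinal weighting of damage classes (not the published CDI scale).
DAMAGE_WEIGHT = {"intact": 0, "broken": 1, "rubble": 2, "gouge": 3}

def damage_index(slices, depth_from, depth_to):
    """Mean damage weight of the classified slices falling inside an arbitrary interval."""
    weights = [DAMAGE_WEIGHT[cls] for top, base, cls in slices
               if top >= depth_from and base <= depth_to]
    return mean(weights) if weights else None

# Because the underlying observations are at 5 cm resolution, the index can be
# queried over any interval of interest (a fault zone, a bench, a whole hole)
# rather than fixed 1.5 or 3.0 m core runs.
print(damage_index(slices, 100.00, 100.25))  # 1.2
```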
Most importantly, we delegate the tedious image classification to the computer and leave the interpretation of the significance of these features to the engineers and geologists.
Bieniawski ZT. 1989. Engineering rock mass classification. New York: Wiley.
Bieniawski ZT. 1976. Rock mass classification in rock engineering. Explor Rock Eng, Proc Symp 1:97–106.
Deere D, Deere DW. 1988. The rock quality designation (RQD) index in practice. In: Kirkaldie L (ed), Rock Classification Systems for Engineering Purposes, ASTM STP 984. Philadelphia: American Society for Testing and Materials, pp. 91–101.
Laubscher DH. 1990. A geomechanics classification system for the rating of rock mass in mine design. Journal of the South African Institute of Mining and Metallurgy 90:257–273.
Jakubec J, Laubscher DH. 2000. The MRMR rock mass rating classification system in mining practice. In: Proceedings of MassMin 2000, Brisbane, Australia. 9 p.
Pells PJ, Bieniawski ZT, Hencher SR, Pells SE. 2017. Rock quality designation (RQD): time to rest in peace. Canadian Geotechnical Journal 54(6):825–834.
Tims S, Caté A, LeRiche A. 2022. Novel approaches in geotechnical classification using machine learning. Slope Stability 2022, Tucson, Arizona.