Digging Deeper: Open Source Alternatives for Pit Optimisation in Python
Earlier, I published an article detailing how I did some vibe coding to build a Python-based implementation of a pit optimiser using the Pseudoflow algorithm. While it is a powerful tool well suited to this task, it has a significant limitation: it is not open source. According to its license, it may be used for educational, research, and not-for-profit purposes without a signed licensing agreement, but anyone wishing to use it for commercial applications must contact the original developer to obtain a commercial license. This creates a real obstacle for practitioners and developers who want to integrate pit optimisation into open-source mining tools, commercial software, or industrial workflows without running into legal or financial hurdles.

And honestly, it doesn't work for me either. Imagine wanting to start your own mining consultancy but being too broke to buy commercial mining software. Even if you build your own tool, you can't legally use it to make a profit because of licensing restrictions; you're basically doomed before you even start. But for every wall that stands in your way, there's always a way through, and that's exactly where open source changes the game. It levels the playing field: you don't need deep pockets, just skill, time, and the determination to build something that works. And just like the Bear Grylls meme says: improvise, adapt, overcome.

That's why I've decided to shift gears and focus on building an open-source alternative to the Ultimate Pit Limit (UPL) optimiser: free, transparent, and accessible to anyone, whether for research or for business. And this is just the starting line. Pit optimisation is only the first step toward a bigger vision of open-source mining software, a future where tools aren't locked behind paywalls or buried in restrictive licenses. Pseudoflow is powerful, no doubt. But it doesn't fit that vision. Open source does.

Exploring Open Source Alternatives

Looking for an open-source alternative to Pseudoflow for pit optimisation, I asked my "LLM advisor" for some pointers and ended up with two Python graph libraries that can handle max-flow/min-cut algorithms, the core of pit optimisation logic: igraph and PyMaxflow. Both are fully open source and widely used in the graph theory and computer vision communities, respectively.

1. igraph

igraph is a general-purpose graph library available in Python, R, and C, widely used for network analysis and graph theory tasks. Interestingly, it also includes a maximum flow solver, which I initially overlooked. My original pit optimiser used igraph solely for graph construction, while the flow computation was handled by the external Pseudoflow library. Only later did I realise (thanks to ChatGPT) that igraph itself can solve max-flow problems.

igraph is released under the GNU General Public License (GPL), making it fully open source and suitable for both academic and commercial use, as long as the GPL terms are respected.

Algorithm Used

igraph implements the Push-Relabel algorithm (also known as the Goldberg-Tarjan algorithm), a well-known method for solving the max-flow problem. This algorithm is efficient and generally performs well for large and dense graphs, though it may not be as fast as some specialised implementations for certain sparse or structured inputs, like mining block models. I believe that some commercial pit optimisation software also implements Push-Relabel (or variants of it), likely due to its balance between theoretical guarantees and practical efficiency.
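To make that concrete, here is a minimal sketch of how the ultimate pit problem can be posed as a max-flow/min-cut in igraph, using the standard closure construction: the source feeds positive-value blocks, negative-value blocks feed the sink, and precedence arcs get effectively infinite capacity. The block values and precedence arcs below are invented purely for illustration; this is not the optimiser's actual code.

```python
# Minimal, hypothetical sketch: a 3-block ultimate pit problem as a min-cut in igraph.
# Block values and precedences are made up for illustration only.
import igraph as ig

INF = 1e12                       # stand-in for "infinite" precedence capacity
values = [-2.0, -3.0, 10.0]      # blocks 0 and 1 are waste, block 2 is ore underneath
precedence = [(2, 0), (2, 1)]    # block 2 can only be mined after blocks 0 and 1

n = len(values)
SOURCE, SINK = n, n + 1

edges, caps = [], []
for b, v in enumerate(values):
    if v > 0:
        edges.append((SOURCE, b)); caps.append(v)      # source -> positive blocks
    else:
        edges.append((b, SINK)); caps.append(-v)       # negative blocks -> sink
for lower, upper in precedence:
    edges.append((lower, upper)); caps.append(INF)     # precedence arcs are never cut

g = ig.Graph(n + 2, edges, directed=True)
g.es["capacity"] = caps

cut = g.st_mincut(SOURCE, SINK, capacity="capacity")
source_side = cut.partition[0] if SOURCE in cut.partition[0] else cut.partition[1]
pit = sorted(b for b in source_side if b < n)           # blocks on the source side form the pit
print("pit blocks:", pit, "value:", sum(values[b] for b in pit))
```

On this toy model the whole column is worth mining (10 - 2 - 3 = 5), so all three blocks end up on the source side of the cut.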
All of this further supports the idea that igraph, while not originally designed for mining, can serve as a viable backbone for prototyping or even powering lightweight open-source optimisers.

Usage

You can find the repo for the pit optimiser using igraph here: https://github.com/m-r-v-n/pit-opt-igraph
Sample block model file used for the optimisation: https://github.com/m-r-v-n/pit-opt-igraph/blob/main/marvin_copper_final.csv

Usage remains largely the same as in the original optimiser. The key difference is that it no longer requires explicit search boundary parameters for the X and Y axes. Instead, the spatial search area is now calculated automatically from the num_blocks_above parameter. This makes the setup simpler and more intuitive, while still maintaining control over the vertical extent used for the slope calculation.

2. PyMaxflow

PyMaxflow is a Python wrapper around a C++ implementation of the Boykov-Kolmogorov (BK) max-flow/min-cut algorithm, originally developed for image segmentation in computer vision, a problem that is structurally quite similar to pit optimisation. Given its performance characteristics and specialisation, PyMaxflow is a strong candidate for building a robust, open-source pit optimiser. The BK algorithm isn't always the fastest in theory, but it performs exceptionally well in practice on many sparse, grid-like graphs, which closely resemble mining block models.

Like igraph, PyMaxflow is released under the GNU GPL, making it fully open source and freely usable in both academic and commercial settings, as long as you comply with the GPL terms.

Usage

You can find the repo for the pit optimiser using PyMaxflow here: https://github.com/m-r-v-n/pit-opt-pmf
Sample block model file used for the optimisation: https://github.com/m-r-v-n/pit-opt-pmf/blob/main/marvin_pmf.csv

Just like the igraph-based pit optimiser, usage is nearly identical to the previous Pseudoflow pit optimiser: the only required parameter for the search boundary calculation is num_blocks_above. Also, for the PyMaxflow input data, the index column in the block model file must start at 0 (zero-based indexing). If it starts at 1 or any other value, the optimiser will raise an error during graph construction or execution. Be sure to check and adjust your input data to prevent issues.

Optimisation Result

Below you can see the optimisation results from Pseudoflow, igraph, and PyMaxflow. The Pseudoflow result is the one from the previous article, while the igraph and PyMaxflow results were produced later. All three runs were done on a free-tier Google Colab instance. All three implementations produced the same undiscounted cashflow, but the real differentiator was optimisation time:

Pseudoflow: 31.76 seconds
igraph (Push-Relabel): 2.19 seconds
PyMaxflow (BK): 0.77 seconds

Just a quick heads-up: all of these tests were run in a Python environment, so results may vary in a different setup, such as calling the underlying C++ implementations directly. Performance will also vary depending on the machine; for example, when I ran PyMaxflow multiple times in Colab, the optimisation time ranged anywhere from 0.3 s to 1 s. That said, even with that variability, PyMaxflow's performance really stood out, and as a miner, I have to dig deeper (pun intended). A quick search online led me to a 2020 academic article exploring the use of the BK algorithm for ultimate pit limit optimisation, clear evidence that the algorithm is applicable beyond its original field. The PyMaxflow library has been publicly available on PyPI since 2014.
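Since PyMaxflow came out on top in these runs, it is worth seeing what its API looks like. Below is a minimal sketch of the same toy closure problem from the igraph example, this time solved with PyMaxflow; the values and precedences are again invented, and this is not the repo's actual code. Note that the node identifiers are zero-based, which is also why the optimiser expects a zero-based index column.

```python
# Minimal, hypothetical sketch: the same 3-block closure problem solved with PyMaxflow.
# Block values and precedences are illustrative only (not from the Marvin model).
import maxflow

INF = 1e12
values = [-2.0, -3.0, 10.0]      # blocks 0 and 1 are waste, block 2 is ore underneath
precedence = [(2, 0), (2, 1)]    # block 2 can only be mined after blocks 0 and 1

g = maxflow.Graph[float]()
nodes = g.add_nodes(len(values))  # zero-based node ids

for b, v in enumerate(values):
    # terminal arcs: positive value becomes source capacity, negative becomes sink capacity
    g.add_tedge(nodes[b], max(v, 0.0), max(-v, 0.0))
for lower, upper in precedence:
    # precedence arcs with (practically) infinite forward capacity and no reverse capacity
    g.add_edge(nodes[lower], nodes[upper], INF, 0.0)

g.maxflow()  # runs the Boykov-Kolmogorov solver
pit = [b for b in range(len(values)) if g.get_segment(nodes[b]) == 0]  # segment 0 = source side
print("pit blocks:", pit, "value:", sum(values[b] for b in pit))
```

The graph construction mirrors the igraph version; only the solver API changes, which makes it straightforward to benchmark the two back to back on the same block model.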
PyMaxflow is based on the 2004 version of the BK algorithm described in "An Experimental Comparison of Min-Cut/Max-Flow Algorithms for Energy Minimization in Vision" by Yuri Boykov and Vladimir Kolmogorov, published in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), September 2004. So yeah, the BK algorithm has been around since 2004, but did no one in mining (including me) actually notice? Or were we all just too busy mining our business? (Another pun intended.)

Anyway, while the optimisation itself is impressively fast in Python, the real bottleneck isn't the solver; it's the graph construction, particularly the generation of precedence arcs. This step is both computationally intensive and memory-hungry, often accounting for the bulk of the total runtime. With tens or even hundreds of millions of arcs, it can quickly overwhelm your system's RAM, making it difficult to run on a standard machine without performance drops or crashes. This is especially true with my implementation, which includes support for variable slope angles, a feature that adds even more complexity to the arc-generation logic. If you find ways to optimise or improve it, I'd genuinely love to hear about it. Feel free to reach out and share your improvements!

PyMaxflow Limits

I did some stress testing in Google Colab using the 300 GB RAM backend, and frustratingly, the process crashed once the number of generated arcs hit around 1 billion. I spent quite a bit of time assuming the issue was in my code, only to eventually uncover the real culprit: a 32-bit limitation that caps the number of arcs at 2³⁰ - 1. No matter how much memory you have, once you hit that threshold, it's game over. Now, theoretically, it might be possible to lift that cap by tweaking the library to support 64-bit indexing. Whether that's practical or advisable… well, let's just say there could be ways. But for most users, it's probably better to manage the arc count conservatively and stay well below the limit.

Wrapping Up

And I guess that's it, for now. What started as a search for an open-source alternative quickly evolved into a deep dive into optimisation speed, algorithmic trade-offs, and performance tuning. From Pseudoflow to Push-Relabel to Boykov-Kolmogorov, it's clear there is more than one way to optimise a pit, and some are faster, lighter, and freer than others.

But don't think of the tools I've shared here as a finished product. Think of them like a car with stock parts: functional, (un)reliable, and ready to go. With the right tuning, upgrades, and creativity, you can turn it into a 10-second car. There's still plenty of room for improvement, especially around memory usage and arc generation performance. If you find ways to optimise or extend it, I'd genuinely love to hear about it.

What's Next?

Pit optimisation is only one piece of the mine planning puzzle. The next big step? Long-term scheduling, and that's exactly what I'm diving into now. I've been digging into optimisation techniques for block sequencing beyond classic MIP: things like simulated annealing, tabu search, large neighborhood search, hybrid methods, and maybe even a bit of reinforcement learning. It's a tougher challenge, but definitely an exciting one! And yes, it will be open source.

Stay tuned; I'll be sharing progress and updates soon! Until then, happy optimising!