
I have used the Z3 solver (https://github.com/Z3Prover/z3) to optimize a job scheduling problem. Roughly, I have ~20k jobs, each defined by N tasks (~1-50). A job is complete only if M tasks (typically 2-3) per job are done. Every task takes some variable amount of time to complete and can only be run during specific windows. Given that the equipment can only perform P tasks in parallel, what is the optimal configuration of tasks (~500k) that will allow us to complete the most jobs?

The search space for this problem is enormous, but Z3 will produce an impressive result within a few minutes.



Amazing! I tried doing this recently, but I guess my modeling was not good enough: it would take hours to find a solution with even a couple of dozen variables. Do you have any pointers on how to model these types of problems efficiently with an SMT/SAT solver?


Z3's documentation and API quirks also led me into some performance sinks before I could tackle the larger models. I'm not a Z3/SMT expert, but what worked for me was to model every time tick as the sum of all tasks that could run at that instant, constrain that sum to be <= P on each tick, and then add the book-keeping for the task-job relationships. That was it.

Using the Python bindings, the biggest performance impact I discovered was ensuring that my jobs and tasks were represented as z3.Bool and not z3.Int (requiring a roundabout way to do a summation on z3.Bools). Something like:

  tasks_this_tick = [(z3.Bool(task_id), 1) for task_id in current_tasks]
  solver.add(z3.PbLe(tasks_this_tick, max_tasks_per_tick))





