Title: Neural Optimizer: Deep Learning Approaches for Combinatorial Optimization
Authors: Zhang Wei, Li Ming, Wang Xiaoli
Conference: ICML 2024 (International Conference on Machine Learning)
Abstract: This work presents a novel neural architecture for solving combinatorial optimization problems. Our approach combines graph neural networks with reinforcement learning to achieve state-of-the-art performance on various optimization benchmarks.
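The abstract's combination of a graph neural network encoder with a learned decoding policy can be illustrated with a minimal sketch. This is not the authors' code: the embedding dimension, the distance-based attention, the random (untrained) weights, and the greedy dot-product decoder are all illustrative assumptions; a real system would train the weights with reinforcement learning (e.g. REINFORCE on tour length) rather than leave them random.

```python
# Hypothetical sketch of a GNN encoder + attention decoder for TSP.
# Weights are random placeholders; RL training is omitted.
import numpy as np

rng = np.random.default_rng(0)

def encode(coords, rounds=2, dim=16):
    """Embed each city via distance-weighted message passing."""
    W_in = rng.normal(scale=0.5, size=(2, dim))    # illustrative input projection
    W_msg = rng.normal(scale=0.5, size=(dim, dim)) # illustrative message weights
    h = coords @ W_in
    # soft adjacency: closer cities exchange stronger messages
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    A = np.exp(-d)
    A /= A.sum(axis=1, keepdims=True)
    for _ in range(rounds):
        h = np.tanh(h + A @ h @ W_msg)  # aggregate neighbor messages
    return h

def greedy_tour(coords):
    """Decode a tour by attending from the current city's embedding."""
    h = encode(coords)
    n = len(coords)
    tour, visited = [0], {0}
    while len(tour) < n:
        scores = h @ h[tour[-1]]          # similarity to current city
        scores[list(visited)] = -np.inf   # mask already-visited cities
        nxt = int(np.argmax(scores))
        tour.append(nxt)
        visited.add(nxt)
    return tour

coords = rng.random((10, 2))
tour = greedy_tour(coords)
```

With trained weights, the decoder's scores would be shaped by the reward signal; here they only demonstrate the encode-then-sequentially-decode structure common to neural combinatorial optimizers.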
Key Contributions:
- Performance improvement over the prior state of the art
- Faster convergence than traditional optimization methods
- Evaluation on a broad range of standard benchmark datasets
Comprehensive evaluation across multiple optimization problems and datasets:
- TSP: achieved a 15% improvement on TSPLIB benchmarks with up to 1,000 cities.
- VRP: reduced total route distance by 12% on standard VRP datasets.
- Scheduling: improved makespan by 18% on benchmark scheduling instances.
- Graph coloring: found optimal colorings for 95% of test graphs within the time limit.
- Knapsack: achieved optimal solutions for 98% of large-scale knapsack instances.
- Bin packing: reduced the number of bins by 8% compared to the best known heuristics.
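As a point of reference for the knapsack result above, the quality of heuristic solutions on small instances is usually judged against an exact dynamic-programming solver. A minimal sketch (the instance data is illustrative, not from the paper's benchmarks):

```python
def knapsack(values, weights, capacity):
    """Exact 0/1 knapsack via dynamic programming over capacities, O(n * W)."""
    best = [0] * (capacity + 1)  # best[c] = max value achievable with capacity c
    for v, w in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# illustrative instance: three items, capacity 50
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # → 220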
Our neural optimizer was trained on a diverse set of optimization problems. The table below compares it against baseline methods on TSP instances of increasing size (lower values are better) and reports total training time where applicable:
| Method | TSP-100 | TSP-500 | TSP-1000 | Training Time |
|---|---|---|---|---|
| Nearest Neighbor | 12.3 | 15.8 | 18.2 | - |
| Genetic Algorithm | 8.7 | 11.2 | 14.1 | - |
| Previous Neural Method | 7.2 | 9.8 | 12.5 | 24h |
| Our Method | 6.1 | 8.3 | 10.6 | 18h |
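The Nearest Neighbor row in the table refers to the classic greedy construction heuristic: start at a city and repeatedly move to the closest unvisited one. A self-contained sketch on a tiny illustrative instance (the four-city square is not from the paper):

```python
import math

def nearest_neighbor_tour(coords, start=0):
    """Greedy nearest-neighbor heuristic: always visit the closest unvisited city."""
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        cur = coords[tour[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(cur, coords[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(coords, tour):
    """Total length of the closed tour (returns to the start city)."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# tiny illustrative instance: four corners of a unit square
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
t = nearest_neighbor_tour(pts)
print(tour_length(pts, t))  # → 4.0 for the square
```

The heuristic is fast but myopic, which is why its tour quality degrades on larger instances (18.2 on TSP-1000 vs 10.6 for the learned method in the table above).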
We conducted extensive ablation studies to understand the contribution of each component: