Monte (python) is a Python framework for building gradient-based learning
machines, like neural networks, conditional random fields, and logistic
regression. Monte contains modules (that hold parameters,
a cost-function, and a gradient-function) and trainers
(that can adapt a module's parameters by minimizing its cost-function
on training data).
Modules are usually composed of other modules, which can in turn contain other modules, etc. Gradients of decomposable systems like these can be computed with back-propagation.
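The module/trainer split described above can be sketched in a few lines of plain NumPy. The class and method names below (`QuadraticModule`, `GradientDescentTrainer`, `cost`, `grad`, `step`) are illustrative stand-ins, not Monte's actual API: a module holds parameters and exposes a cost and its gradient, and a trainer adapts the parameters by following that gradient downhill.

```python
import numpy as np

class QuadraticModule(object):
    """Hypothetical module: holds parameters, a cost-function, and a gradient."""
    def __init__(self, dim):
        self.params = np.random.randn(dim)    # parameters held by the module

    def cost(self, data):
        # average squared distance between the parameters and the data points
        return np.mean(np.sum((data - self.params) ** 2, axis=1))

    def grad(self, data):
        # gradient of the cost with respect to the parameters
        return -2.0 * np.mean(data - self.params, axis=0)

class GradientDescentTrainer(object):
    """Hypothetical trainer: adapts a module's parameters by minimizing its cost."""
    def __init__(self, module, stepsize=0.1):
        self.module = module
        self.stepsize = stepsize

    def step(self, data):
        self.module.params -= self.stepsize * self.module.grad(data)

data = np.random.randn(100, 2) + 3.0          # toy training data centered at 3
module = QuadraticModule(2)
trainer = GradientDescentTrainer(module)
for _ in range(200):
    trainer.step(data)
# after training, the parameters approach the minimizer of the cost
# (here, the mean of the data)
```

Because the module exposes its cost and gradient behind a fixed interface, the trainer never needs to know what the module computes internally; Monte's real modules and trainers interact in the same spirit.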
Requirements
Monte requires Python (2.4 or later) and the packages numpy (version 1.0 or later), scipy (version 0.5.1 or later), and matplotlib (version 0.87 or later) to be fully functional. (Installing those packages is advisable in any case, if you are ever planning to do any serious data-analysis, machine learning, or numerical processing with Python.)
Example: Training a neural network
from numpy import arange, newaxis, sin
from pylab import randn, plot, scatter, hold
from monte.models.neuralnet import NNIsl
import monte.train

mynn = NNIsl(1,10,1)  # neural network with one input-, one output-,
                      # and one hidden layer with 10 sigmoid units
mytrainer = monte.train.Conjugategradients(mynn,10)

inputs = arange(-10.0,10.0,0.1)[newaxis,:]        # produce some inputs
outputs = sin(inputs) + randn(1,inputs.shape[1])  # produce some outputs
testinputs = arange(-10.5,10.5,0.05)[newaxis,:]   # produce some test-data
testoutputs = sin(testinputs)

for i in range(50):
    hold(False)
    scatter(inputs,outputs)
    hold(True)
    plot(testinputs,mynn.apply(testinputs))
    mytrainer.step((inputs,outputs),0.0001)
    print mynn.cost((inputs,outputs),0.0001)
Documentation
Monte documentation is being written incrementally. The latest PDF of the documentation can be found here.
Contact
If you want to contribute or have questions, send mail to: monte[at]cs.toronto.edu
Monte was written by Roland Memisevic.
Philosophy
While Monte contains some simple implementations of a variety of learning methods (such as neural networks, k-nearest neighbors, logistic regression, k-means, and others), Monte's focus is on gradient-based learning of parametric models. The simple reason for this is that these methods keep turning out to be the most useful in applications -- especially when dealing with large datasets.
Monte is not yet another wrapper around some C++ SVM library. Python itself comes with powerful numerical processing facilities these days, and a lot of interesting machine learning is possible in pure Python.
Monte's design philosophy is inspired mainly by the amazing gradient-based learning library for the Lush language, written by Yann LeCun and Leon Bottou. The idea of that library is to use trainable components (objects) that have fprop- and bprop-methods, and to build complicated architectures by simply sticking these together. Error derivatives can then be computed easily using back-propagation. Modular design based on back-propagation relies on the observation that back-propagation (despite common misguided claims to the contrary) is not just "an application of the chain rule".
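The fprop/bprop idea can be illustrated with a minimal sketch, assuming nothing of Monte's actual classes: each component computes its output in fprop and, given the derivative of the cost with respect to that output, returns the derivative with respect to its input in bprop. Sticking components in a list then gives a full backward pass for free. The `Linear` and `Tanh` classes below are hypothetical examples, not Monte's API.

```python
import numpy as np

class Linear(object):
    """Trainable component with fprop/bprop (illustrative, not Monte's API)."""
    def __init__(self, nin, nout):
        self.w = 0.1 * np.random.randn(nout, nin)

    def fprop(self, x):
        self.x = x                           # remember input for the backward pass
        return self.w.dot(x)

    def bprop(self, dout):
        self.dw = np.outer(dout, self.x)     # gradient w.r.t. this module's weights
        return self.w.T.dot(dout)            # gradient passed to the module below

class Tanh(object):
    """Parameter-free nonlinearity with the same interface."""
    def fprop(self, x):
        self.y = np.tanh(x)
        return self.y

    def bprop(self, dout):
        return dout * (1.0 - self.y ** 2)    # chain rule through the nonlinearity

# stick components together: fprop left-to-right, bprop right-to-left
layers = [Linear(3, 5), Tanh(), Linear(5, 1)]
x = np.random.randn(3)
out = x
for layer in layers:
    out = layer.fprop(out)
grad = np.ones(1)                            # d(cost)/d(output), for cost = output
for layer in reversed(layers):
    grad = layer.bprop(grad)
# grad now holds d(cost)/d(input); each Linear holds d(cost)/d(weights) in .dw
```

Nothing in the loop knows what the individual components compute; adding a new layer type only requires implementing fprop and bprop with matching shapes, which is exactly what makes this modular design attractive.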