Blogging at the intersection of mathematical optimization, machine learning, and software engineering. Ph.D., research scientist at Verizon Media. Follow me on Twitter for updates. More about me.
- Shape restricted function models
- Mini-batching with in-memory datasets
- Fun with sparsity in PyTorch via Hadamard product parametrization
- Untilting the tilted loss
- Regularization properties of polynomial bases
- A Bernstein SkLearn model calibrator
- SkLearning with Bernstein Polynomials - continued
- SkLearning with Bernstein Polynomials
- Keeping the polynomial monster under control
- Are polynomial features the root of all evil?
- Proximal Point to the Extreme - Factorization Machines
- Proximal Point with Mini Batches - Regularized Convex On Linear
- Proximal Point with Mini Batches - Convex On Linear
- Proximal Point with Mini Batches
- Selective approximation - the prox-linear method for training arbitrary models
- Proximal Point is, after all, yet another gradient method
- Proximal Point - regularized convex on linear II
- Proximal Point - regularized convex on linear I
- Proximal point - convex on linear losses
- Proximal Point - warmup