Linear least-squares systems pop up everywhere, and there are many fast ways to solve them. We’ll be looking at one such method: GMRES.
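As a taste of what the post covers, here is a minimal sketch of GMRES in action using SciPy's off-the-shelf implementation (GMRES itself targets square systems; the system below is a small random well-conditioned one I made up for illustration):

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Toy example: a well-conditioned 50x50 system A x = b.
rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

# GMRES builds up a Krylov subspace and minimizes the residual over it.
x, info = gmres(A, b, atol=1e-8)

print(info)                          # 0 signals convergence
print(np.linalg.norm(A @ x - b))    # residual norm should be tiny
```

For large sparse matrices you would pass `A` as a `scipy.sparse` matrix or a `LinearOperator` instead of a dense array; the algorithm only ever needs matrix-vector products.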
Ah, the classical “first post”, often the only post a blog ever gets. Let us hope that is not the case here.
I feel like I should write down some things about my side projects. I tried using Medium, but it has two significant problems: the platform feels too monetized, and it doesn’t work well with LaTeX. Since I like to think in mathematical terms, good LaTeX support is essential for me.
This website is made using Jekyll and hosted on GitHub Pages. I don’t know how well this will work out, but we shall see. At least I like it better than Medium.
I hope to soon write a couple of posts about recent projects:
- Bayesian analysis of exam grades (I posted this over on GitHub, but I have done significant work on the subject since then)
- Analysis of Moodle logs
- Analysis of my last.fm scrobble history
- Analysis of ISU figure skating scores, with a statistical demonstration that the judging is biased
- How to scrape data from PDF files (using the ISU scores as an example)
We recently wrote a paper on supervised machine learning using tensors; here’s the gist of how it works.
A lot of data is naturally of ‘low rank’. I will explain what this means, and how to exploit this fact.
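To preview the idea with a toy example of my own (not from the paper): if a matrix is close to low rank, a truncated SVD captures almost all of it with far fewer numbers.

```python
import numpy as np

# Build a 100x80 matrix that is exactly rank 2, plus a little noise.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 80))
A += 1e-6 * rng.standard_normal(A.shape)

# Truncated SVD: keep only the k largest singular values/vectors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] * s[:k] @ Vt[:k]   # best rank-2 approximation of A

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(rel_err)   # tiny: two components explain essentially everything
```

The rank-2 factors need 100·2 + 2·80 = 360 numbers instead of 8000, which is the kind of compression that low-rank structure buys you.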
Parsing and editing Word documents automatically can be extremely useful, but doing it in Python is not that straightforward.
Finally, let’s look at how we can automatically sharpen images, without knowing how they were blurred in the first place.
Deconvolving and sharpening images is actually pretty tricky. Let’s have a look at some more advanced methods for deconvolution.
In order to automatically sharpen images, we need to first understand how a computer can judge how ‘natural’ an image looks.
Deconvolution is one of the cornerstones of image processing. Let’s take a look at how it works.
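To give a flavour of the simplest case before the posts dig into the harder ones, here is a 1-D sketch with a *known* blur kernel, using regularized division in Fourier space (the signal, kernel, and regularization constant are all illustrative choices of mine, and I assume periodic boundaries):

```python
import numpy as np

# A simple "image": a 1-D box signal.
n = 256
x = np.zeros(n)
x[100:130] = 1.0

# Blur it with a 5-tap moving-average kernel (circular convolution via FFT).
kernel = np.zeros(n)
kernel[:5] = 1.0 / 5.0
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel)))

# Deconvolution: divide in frequency space. The small eps keeps us from
# blowing up frequencies where the kernel's transform is nearly zero.
eps = 1e-8
K = np.fft.fft(kernel)
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(K) / (np.abs(K) ** 2 + eps)))

print(np.max(np.abs(x - x_hat)))   # reconstruction error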