
Sorting Friends and Permutations

Introduction

You may be familiar with permutations: rearrangements of a sequence. If you are, you probably know that a sequence of length \(n\) has \(n!\) arrangements (if the elements are all distinct; duplicates reduce the number of unique rearrangements). There are many, many uses of permutations and many specific permutations of interest.
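As a quick illustration (my own sketch, not from the post), Python’s itertools makes both counts easy to check:

import math
from itertools import permutations

# All distinct elements: the count matches n!.
print(len(list(permutations("abc"))), math.factorial(3))  # 6 6

# With duplicates, fewer *unique* rearrangements survive.
print(len(set(permutations("aab"))))  # 3, not 3! = 6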


Some Iron Condor Heuristics

In the world of Iron Condors (an option position composed of a short, higher-strike (bear) call spread and a short, lower-strike (bull) put spread), there are two heuristics that are bandied about without much comment. I’m generally unwilling to accept some Internet author’s word for something (unless it completely gels with everything I know), so I set out to convince myself that these heuristics held. Here are some demonstrations (I hesitate to use the word proof) that convinced me.
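For readers new to the position, here is a minimal sketch of an iron condor’s payoff at expiry (my own illustration with made-up strikes, not anything from the notebooks):

def iron_condor_payoff(spot, put_long, put_short, call_short, call_long, credit):
    # Strikes are ordered: put_long < put_short < call_short < call_long.
    payoff = credit                         # net credit received up front
    payoff -= max(put_short - spot, 0.0)    # short (sold) put
    payoff += max(put_long - spot, 0.0)     # long, lower-strike put (protection)
    payoff -= max(spot - call_short, 0.0)   # short (sold) call
    payoff += max(spot - call_long, 0.0)    # long, higher-strike call (protection)
    return payoff

# Flat in the middle, capped losses in the wings.
for spot in (80, 95, 100, 105, 120):
    print(spot, iron_condor_payoff(spot, put_long=85, put_short=90,
                                   call_short=110, call_long=115, credit=2.0))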

Going All Meta (Part 2) – Some Python-Fu

In a previous post (a long, long time ago), I said I was going to talk about metaclasses (or at least show an abuse of them) in Python. I am going to get to that, but I want to set the stage by talking about another topic that isn’t nearly as black-magic-y: decorators. When I’m teaching or training, people commonly ask about decorators because they have seen them and been confused by them, mainly because a common type of decorator is a function that takes in a function and hands back a different, modified function. Huh. Back to meta-ville.

Simply put, a decorator is a Python function with some special characteristics. That Python function takes a single, lonely input. A decorator can be either a function-decorator or a class-decorator. In the first case, the decorator takes a function as its input and produces a (modified) function as its output. In the second case, it takes a class as its input and produces a (modified) class as its output.
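Since the notebook below plays with function timing, here is a minimal sketch of that flavor of function-decorator (my own version, assumed rather than copied from the notebook):

import functools
import time

def timed(func):
    # Take a function in...
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.6f}s")
        return result
    # ...and hand a modified function back.
    return wrapper

@timed
def slow_sum(n):
    return sum(range(n))

slow_sum(1_000_000)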

The raw notebook: Decorator Fun – Function Timing (raw)

As seen by nbviewer: Decorator Fun – Function Timing (through nbviewer)

Some Iron Condor Heuristics

I’m still debating the best way to work with IPython notebooks on this blog. However, until I come to a “final” answer (which might mean moving away from WordPress to a GitHub Pages/Pelican solution, à la Jake VdP), I’m just going to (hopefully) upload the notebooks and link to them via nbviewer. Here goes:

The raw notebook: Two Iron Condor Heuristics (Raw)

As seen by nbviewer: Two Iron Condor Heuristics (Through nbviewer)

Going All Meta (Part 1)

Meta note on a meta post: this is my first Python code post (I think). Getting the highlighting was trivial: in WordPress I installed and activated the SyntaxHighlighter Evolved plugin. And “go!” This gives you a square-bracket tag (shortcode): [code language="python"] … [/code]. And a last meta comment: to display a shortcode literally in a post, you enclose the entire start-and-end block in an extra set of brackets, like this: [[code] … [/code]]. For those keeping track, I had to use double outer brackets (in addition to the brackets on the code tags) in my WordPress text to get that to show up for you. Also, the “Visual” editor borks this badly.

One of the black-magic corners of Python is the use of metaclasses. Since other folks have written extensively on what they are, I’m going to focus on one use (abuse?) of them. Here are some reference links on classes, metaclasses, and types in Python:

And a quick sample of Python code. Actual code will come with the next post.

print("hello world")
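To hint at where Part 2 is headed, here is a minimal metaclass sketch (entirely my own toy, not the promised example):

# A metaclass is a subclass of type: it can rewrite a class as it is created.
class UpperAttrs(type):
    def __new__(mcls, name, bases, namespace):
        upper = {k.upper() if not k.startswith("__") else k: v
                 for k, v in namespace.items()}
        return super().__new__(mcls, name, bases, upper)

class Greeter(metaclass=UpperAttrs):
    greeting = "hello world"

print(Greeter.GREETING)  # the attribute name was upper-cased at class creation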

Stomping da’ Moon

With about 6″ of snowfall in the past 24 hours, I had a great opportunity to do some stomping (my term for clunky snowshoeing) at da’ Moon. I really do want to write about something other than my outdoor clothing choices. Mostly, I want to write about something else that is near and dear to me: training. But, until then.

It turns out that medium socks (a heavier Smartwool pair), gaiters, gym pants, snow pants, a thermal shirt (a “heavy, just-over-base-layer” shirt), and my Patagonia Guide Softshell (with ski gloves, of course) were basically too heavy for mid-20s (mid-10s with windchill) and overcast. I didn’t really think I was trucking, but I did cover a fair bit of ground in 1:15 or so.

I’ll close with a reminder (last mentioned on a long-dead cs.pitt.edu blog) that the reason I adore snowshoeing is that I can bushwhack just about anywhere. The leaves are down. The ground cover is carpeted with snow. And, short of dense gaggles of branches, trees, or scrub (prickers being the only real possible “problem”), you can walk straight lines up, down, and across just about anything.

This is ultra-cool when you spend a lot of time on a mountain bike following pre-laid track.

Another Clothing Note

Just a quick note on winter clothing to go along with my prior post: lower 40s and very humid/muggy/damp. Riding boots, thick socks, light tights, baggy Fox shorts plus chamois. Started with a medium fleece, dropped it after the first real climb. Wore two long-sleeve shirts (one light base layer, one long-sleeve downhill-style jersey). Started with a skullcap plus urban helmet. Ditched the skullcap after about five minutes. Overall, I started too warm, but I didn’t want to get chilled with the dampness. Once I warmed up, though, I was quite toasty. I also spent most of the ride pounding in my big ring.

The temperature’s a droppin’. The riding continues …

I had a great ride, mountain biking, at da’ Moon this evening. It was a brisk (cold) late-November day: I started at 4:00 and rode until 5:30. The temperature was ~32F (measured at Courtdale via an iPhone weather app). Towards the end, the wind picked up a bit. Sunset was 4:45 at Kingston, and I swapped my semi-brown shades for my helmet light around 5:00.


NYC 27-Hour Date

I had the most wonderful opportunity to explore NYC for a day with my wife, MrsDrFenner. We certainly made the best of it. We met at Grand Central Station (yay for meeting there and not saying good-bye) and walked to The Morgan. The Morgan had been recommended to me by my closest undergraduate mathematics professor, who happened to teach me about Euclid, Plato, and mathematical Probability & Statistics. When we (TheDrsFenner) visited our alma mater (Allegheny College) for a reunion weekend, we got to eat dinner with Dr. LoBello, and he advised us to go to The Morgan. It was very good advice. MrsDrFenner said she was more in awe at The Morgan than she was at MoMA (in fairness, she didn’t wait in line for the Magritte exhibit).

I was personally in awe of some of the letters of historical and literary significance that were on display. However, I almost fell over when I saw a copy of Byrne’s Euclid, open and displaying (I think) the 7th proposition (i.e., a theorem) of Book I. Had it been open to the 47th proposition, I would have fallen right over. #47 is the Pythagorean theorem. I’ll try to remember to link a picture of me beside the Byrne. Seeing it reminded me that I’d like to take the online images for Book I and print them on a poster. I’m not sure about sizing; I’m hoping pdfjam will make the project tolerable. We also saw nice exhibits of Leonardo da Vinci and Edgar Allan Poe.

As we strolled out, we ducked into a coffee shop (Lucid?) for a couple of espressos. From there, we headed to dinner at The Cannibal. The atmosphere was young, trendy, and communal. Shared tables were the order of the day, and it worked nicely. There was a nice variety of beer (although there weren’t too many must-haves for me; checking again, I see a Hill Farmstead on tap that I would have attacked). We did really enjoy some beer-cocktails. And the tandoori lamb belly (which might do better marketed as tandoori lamb ribs) was massively succulent. I probably won’t get it again. But it was great to try once! The watermelon-cilantro-hot pepper salad really worked to cut through the fat and provide a clean counterpoint to the heaviness of the succulent belly.

Our dinner done, we headed to two bars. The first, Middle Branch, had a speakeasy feel without requiring a password. You do need to know where to look. Good drinks and great atmosphere. We really appreciated the standing room downstairs and the (uncrowded) seating area upstairs. My riff on a Negroni (with muddled grapes) was definitely worthwhile (I’m a big fan of the Negroni and Negroni-template riffs). MrsDrFenner needed something light to help her get past the heavy dinner: our server read her mind and brought a cucumber-gimlet-like drink that fit the bill. One and done: we wanted to find some live jazz. Which we did at Measure. We grabbed a specialty cocktail (or two) and then transitioned to some fizzy water. Our stomachs were in dire need of help.

Having satisfied the requisite need to “paint the town red,” we strolled back to Grand Central and hopped a train to the Financial District (where my hotel for the meeting was located). We decided to try for some good NYC brunch in the AM. We took a good bit of a walk to get to Prune. It was bustling and tiny, and the food was great. We both couldn’t refuse hollandaise (on eggs Benedict), but we were disappointed that we couldn’t get Bloody Marys before noon (I guess it’s a NY state liquor board thing; maybe only on Sunday?). MrsDrFenner pointed out that the liquor board needs to offer a “clarification” that “of course, such laws don’t apply to Mimosas and Bloody Marys.” Until then, do your research.

Our last main stops were Central Park, a hint of shopping (Athleta in person?!?), and a bite to eat before rolling to the Port Authority (Bus Terminal) and heading back to the Valley. Central Park was a big win. My first (naive) thought was: they have rocks here! That is, rocks big enough to make a 6-year-old delight in running up, down, over, and around them. With hidden paths to explore everywhere. We started in the area called The Ramble, and it was a great strolling treat. It helped that the rain held off until we were on the subway to the PABT.

PyData NYC Nov. 2013

PyData NYC 2013 was a pretty great time. It is always fun to meet folks as passionate about your favorite tools as you are. There’s probably too much to really mention, but I definitely want to throw together a few of my thoughts and ideas. Without further ado …

Some of the talks I went to:

  • Travis talking about conda (plus a couple of related blog posts). While I’m an admitted Gentoo fanboy (actually, I don’t fan at all; I just use it), having a lighter-weight option for the Python ecosystem (across *nix (including OS X) and Windows) is really nice. If I had realized a few things about conda last year (I’m not sure how far along it was at the right time point), I might have used it for some internal code deployment.
  • Yves talking about Performance Python (and an IPython notebook of the same; some other talk material is at his website). Not much here was new to me, but being reminded of the fundamentals and low-hanging fruit is always good.
  • Dan Blanchard talking about skll (and a link to the talk). skll seems to take care of several procedural meta-steps in scikit-learn programs: train/test/CV splits and model-parameter grid searches (see the sketch after this list).
  • Thomas Wiecki talking about pymc3 (most of the talk material shows up in the pymc3 docs; he also mentioned Quantopian’s zipline project, and he has a few interesting Git repos).
  • Peter Wang’s keynote was insightful, thought-provoking, and not the typical painful keynote that has you checking email the whole time. He mentioned a Jim Gray paper that seems worthwhile. By reputation, everything Jim Gray did was worthwhile. [Gray disappeared while sailing a few years back.]
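For concreteness, here is a minimal hand-rolled version of the meta-steps skll automates, written directly against scikit-learn (my own illustration; skll’s actual interface is configuration-file driven):

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Train/test split plus a cross-validated parameter grid search:
# the procedural steps that tools like skll wrap up for you.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

grid = GridSearchCV(SVC(),
                    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
                    cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))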

A thought that I’ve had over the years, and that I’d love to see come to (ongoing) completion, is some sort of CI (continuous integration) job that grabs the main Python learning systems, builds them, and runs [some|many|most|all] of the learning algorithms on synthetic, random, and/or standard (UCI, Kaggle, etc.) datasets. Of course, we would measure resource usage (time/memory) and error rates. While the time performance is what would really get most people interested (and also cause the most dissent: you weren’t fair to XYZ), I’m more interested in verifying that, say, random forests in scikit-learn and Orange give similar results within some margin. Throwing in some R and MATLAB options would give some comparison to the outside world, as well. A toy version of one cell of that comparison appears below.
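Here is my own sketch of one (library, algorithm, dataset) cell: a single learner on a synthetic dataset, recording fit time and test error only:

import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# One cell of the comparison matrix: fit, time it, record the test error.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

start = time.perf_counter()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
elapsed = time.perf_counter() - start

error_rate = 1.0 - model.score(X_test, y_test)
print(f"scikit-learn RandomForest: {elapsed:.2f}s, test error {error_rate:.3f}")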

Doing these comparisons in the right way has a number of difficulties, as I discussed with Jake VanderPlas. In just a few minutes, we were worried about data-format differences (less important for NumPy-based alternatives; Orange uses its own ExampleTable, which you can convert to/from NumPy arrays), default and hard-coded parameters (possibly not being able to compare equivalent models), and social issues.