Chunky palindromes
Pyth, 40 34 27 22 bytes
Lhsmy:bd_df!xb>TbS/lb2
Try it in the online interpreter.
Heavily golfed down from the initial 40-byte version. Thanks to FryAmTheEggman for pointing out a couple of useful operators (the docs are hard to search!) that saved 6 bytes in total. Thanks to Dennis for a clever single-byte saving: interpreting the result of x as a truthy/falsy value rather than an index - !xb>Tb rather than q<bT>Tb.
How it works:
We define a function y which determines the chunkiness of a string b by recursively calling itself on substrings of b. Functions are automatically memoized in Pyth, so the recursion has very little overhead in terms of time.
L                         def y(b): return ...
                 S/lb2    The range [1, 2, ..., len(b)/2]
          f!xb>Tb         Keep the values T for which b[:T] == b[-T:]
   m                      Map each surviving value d to...
    y:bd_d                  y(b[d:-d])
 hs                       Take the sum and add one (implicit return)
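The same recursion can be sketched in ordinary Python; functools.lru_cache stands in for Pyth's automatic memoization, and the name chunkiness is mine, not part of the challenge:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def chunkiness(b):
    """Count the palindromic chunk decompositions of the string b."""
    # 1 for the trivial decomposition (the whole string as one chunk),
    # plus, for every prefix that equals the suffix of the same length,
    # the chunkiness of the middle left after stripping both ends.
    return 1 + sum(chunkiness(b[n:-n])
                   for n in range(1, len(b) // 2 + 1)
                   if b[:n] == b[-n:])
```

For example, "aaaa" has four decompositions (aaaa, aa|aa, a|aa|a, a|a|a|a), so chunkiness("aaaa") returns 4, while a string with no matching prefix/suffix pair, such as "abc", returns 1.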
CJam (41 39 bytes)
qM{_,2/,\f{\~_2$>@2$<@~)/(@=\M*j*}1b)}j
Online demo
This is "eager" in the sense that it finds the number of chunky palindromes for each "central" string (i.e. the result of removing the same number of characters from both ends of the original string), but because it uses the auto-memoising j operator, each count is calculated only once, giving a very fast program (and saving a few characters over a non-memoised implementation).
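The eager strategy can be sketched in Python (a hypothetical helper, not a transliteration of the CJam): index each central string by how many characters were dropped from each end, and fill in the counts from the innermost central string outwards, so each one is computed exactly once:

```python
def count_chunky(s):
    # counts[k] = number of decompositions of the central string
    # obtained by dropping k characters from each end of s.
    n = len(s)
    counts = {}
    for k in range(n // 2, -1, -1):  # innermost central string first
        b = s[k:n - k]
        counts[k] = 1 + sum(counts[k + m]
                            for m in range(1, len(b) // 2 + 1)
                            if b[:m] == b[-m:])
    return counts[0]
```

Since every central string is identified by a single integer k, a plain dict suffices as the memo table, and no recursion is needed.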
Thanks to Dennis for a one byte saving.