Get a word's individuality!
Bash, 41, 39, 34, 33, 26 bytes
EDIT:
- Converted from a function to a script
- Shaved one byte by removing the ignore-case flag
- Replaced wc -l with grep -c, saving 5 bytes. Thanks @Riley!
A rather trivial solution in Bash + coreutils.
Golfed
bc -l<<<`grep -c $1`/${#1}
Test
>cat /usr/share/dict/words| ./test ulous
7.60000000000000000000
>grep -i ulous /usr/share/dict/words | wc -l
38
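For readability, the same computation can be sketched in Python (the function and parameter names here are my own, not part of the answer): count the lines containing the word, then divide by the word's length.

```python
def individuality(word, lines):
    # grep -c $1 : count the lines that contain the word
    matches = sum(word in line for line in lines)
    # bc -l <<< matches/${#1} : divide by the word's length
    return matches / len(word)

words = ["fabulous", "miraculous", "cat"]
print(individuality("ulous", words))  # 2 matches / 5 letters
```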
Python 3, 52 49 bytes
-3 bytes thanks to Kade, for assuming w to be the word list as a list:
f=lambda s,w:w>[]and(s in w[0])/len(s)+f(s,w[1:])
Previous solution:
lambda s,w:sum(s in x for x in w.split('\n'))/len(s)
Assumes w to be the word list. I chose Python 3 because my word list contains some non-ASCII characters and Python 2 does not like them.
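To see the recursion in action, here is the 49-byte lambda applied to a small hypothetical word list (the base case w>[] returns False, which acts as 0 in the sum):

```python
f=lambda s,w:w>[]and(s in w[0])/len(s)+f(s,w[1:])

# Each matching word contributes 1/len(s); non-matches contribute 0
words = ["fabulous", "miraculous", "cat"]
print(f("ulous", words))  # 2 matches / 5 letters
```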
Perl 6, 45 36 33 32 bytes
Wordlist as a filename f, 45 bytes:
->$w,\f{grep({/:i"$w"/},f.IO.words)/$w.chars}
Wordlist as a list l, 36 bytes:
->$w,\l{grep({/:i"$w"/},l)/$w.chars}
Using placeholder variables and the reverse (R) meta-operator, 33 bytes:
{$^w.chars R/grep {/:i"$w"/},$^z}
Using .comb to get a list of characters, rather than .chars to get a count, 32 bytes:
{$^w.comb R/grep {/:i"$w"/},$^z}
Expanded:
{ # block lambda with placeholder parameters 「$w」 「$z」
$^w # declare first parameter ( word to search for )
.comb # list of characters ( turns into count in numeric context )
R[/] # division operator with parameters reversed
grep # list the values that match ( turns into count in numeric context )
{ # lambda with implicit parameter 「$_」
/ # match against 「$_」
:i # ignorecase
"$w" # the word as a simple string
/
},
$^z # declare the wordlist to search through
#( using a later letter in the alphabet
# so it is the second argument )
}
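For comparison, the Perl 6 approach (a case-insensitive grep over the word list, divided by the word's length) can be sketched in Python; the function name is my own, re.escape stands in for the quoted "$w", and re.IGNORECASE for the :i adverb:

```python
import re

def uniqueness(w, wordlist):
    # grep {/:i"$w"/}, list : keep words containing w, ignoring case
    matches = [x for x in wordlist if re.search(re.escape(w), x, re.IGNORECASE)]
    # divide the match count by the length of w
    return len(matches) / len(w)
```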