DictionaryLookup very slow
If you just want to count how many of the words come from each language, you can simplify your code (although you lose the ability to parallelize the search process).
Using the following sample list of words:
wordList = {"ab", "aba", "abá", "abaá", "abab", "ababa", "abábades",
"ababillarse", "ababol", "abaca", "abacá", "abacà", "abacais",
"abacallan", "abacallana", "abacallanà", "abacallanada", "abacas",
"abacate", "abacateiro", "abacateiros", "abacates", "abacaxi",
"abacaxis", "abacht", "abaci", "aback", "abactis", "abactissen",
"abactor", "abacus", "abacuses", "abad", "abadański", "abadenn",
"abadennad", "abadh", "abádszalóki", "abafájai", "abafalai",
"abafalvai", "abaft", "abaich", "abaid", "abaissa", "abaissaient",
"abaissais", "abaissait", "abaissant", "abak", "abaka",
"abakańczyk", "abako", "abakus", "abakusen", "abakusens",
"abakuser", "abakuserna", "abalienato", "abänderlich", "abändern",
"abandon", "abandonner", "abandonnér", "abandonnere",
"abandonnerede", "abarbeiten", "abartig", "abate", "abætere",
"abati", "abato", "abavus", "abba", "abbabarn", "abbabarna",
"abbabarnanna", "abbabarni", "abbacinati", "abbagli", "abbaglia",
"abdicirah", "abdicirahu", "abdicirala", "abdicirali", "abdiciram",
"abdiki", "abdomeno", "abela", "abessiivi", "abessiivia",
"abessiivimuotojen", "abessiivin", "abi"};
We can then tally the languages as follows:
wordListLanguage = DictionaryLookup[{All, wordList}];
Tally[#[[1]] & /@ wordListLanguage]
(*{{"BrazilianPortuguese", 11}, {"Breton", 7}, {"BritishEnglish", 8}, {"Catalan", 8}, {"Croatian", 5}, {"Danish", 5}, {"Dutch", 5}, {"English", 7}, {"Esperanto", 5}, {"Faroese", 5}, {"Finnish", 5}, {"French", 7}, {"Galician", 12}, {"German", 5}, {"Hungarian", 6}, {"IrishGaelic", 5}, {"Italian", 5}, {"Latin", 5}, {"Polish", 7}, {"Portuguese", 8}, {"ScottishGaelic", 5}, {"Spanish", 7}, {"Swedish", 6}}*)
For my sample word list, the search took 22.9 seconds. I'm not sure whether that is an improvement over your code, but it compares favourably with Anon's approach (123.6 s), implemented as follows:
{#, Length@Intersection[wordList, DictionaryLookup[{#, All}]]} & /@ DictionaryLookup[All]
Of course, you should be able to parallelize the above process, but it appears that each of the kernels you start will download the data from Wolfram again.
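If you want to try it anyway, here is a minimal sketch of one way to set it up (I haven't timed this; ParallelMap and DistributeDefinitions are standard functions, but each subkernel may still trigger its own download of the dictionary data the first time it calls DictionaryLookup):
(* Push the word list to the subkernels, then do the per-language
   counts in parallel; each subkernel may still re-download the data. *)
LaunchKernels[];
DistributeDefinitions[wordList];
ParallelMap[
 {#, Length@Intersection[wordList, DictionaryLookup[{#, All}]]} &,
 DictionaryLookup[All]]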
Using Intersection as suggested by Anon seems to give good performance, provided I exclude Hebrew from the languages.
First define the list of all languages except Hebrew, and get the complete dictionary for each language:
lang = DeleteCases[DictionaryLookup[All], "Hebrew"];
allwords = DictionaryLookup[{#, __}] & /@ lang;
Now make a list of 45000 randomly chosen words:
wordList = RandomChoice[Flatten @ allwords, 45000];
The computation takes about 10 seconds:
Timing[Length[Intersection[wordList, #]] & /@ allwords]
(* {10.094, {394, 5752, 367, 1763, 6793, 2530, 4004, 2408, 1850, 212,
1232, 6580, 1645, 7273, 762, 121, 2249, 701, 1367, 142, 2565, 7802,
286, 176, 1389, 1415}} *)
If I don't remove Hebrew from the list of languages, it is far slower. I have no idea why this might be.
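If you want to see where the time goes, here is a rough diagnostic sketch (untested, and the name diag is just for illustration) that times the Intersection step for each language separately and lists the slowest first:
(* The dictionary lookup happens in the Module initializer, so only the
   Intersection itself is timed. *)
diag = Module[{words = DictionaryLookup[{#, __}]},
     {#, First@AbsoluteTiming[Intersection[wordList, words];]}] & /@
   DictionaryLookup[All];
Reverse@SortBy[diag, Last]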
For word lists which may contain repeated words, you could use this:
wordcounts = Dispatch[Rule @@@ Tally[wordList]];
Total[Intersection[wordList, #] /. wordcounts] & /@ allwords
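As a quick sanity check (a sketch reusing the objects defined above; the names plain and withDupes are just for illustration), you can compare these totals with the plain Intersection counts. Since RandomChoice samples with replacement, the duplicate-aware totals should be at least as large:
(* Distinct matches vs. matches counted with multiplicity, per language *)
plain = Length[Intersection[wordList, #]] & /@ allwords;
withDupes = Total[Intersection[wordList, #] /. wordcounts] & /@ allwords;
Transpose[{lang, plain, withDupes}] // TableForm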