A quicker way than Outer
Vectorization will help a lot:
a[x_?NumericQ] := N[Exp[-Abs[x]]];
x = Table[-10 + 0.02 (j - 1), {j, 1, 1001}];
A = Outer[a[#1 - #2] &, x, x]; // AbsoluteTiming
(* {2.11988, Null} *)
B = Exp[-Abs[x - #]] & /@ x; // AbsoluteTiming
(* {0.016182, Null} *)
A == B
(* True *)
Notice that I am doing arithmetic on vectors the size of x
instead of scalars. This is much faster than element-wise computation.
The idea is from Leonid's answer here:
- https://mathematica.stackexchange.com/a/21863/12
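The listability at work here can be seen on a tiny example (a hypothetical 3-element x, for illustration only): subtracting a scalar from a list threads automatically, so each application of Exp[-Abs[x - #]] & produces an entire row of the matrix in one vectorized step:

x = {1., 2., 3.};
x - 2.
(* {-1., 0., 1.} *)
Exp[-Abs[x - #]] & /@ x
(* {{1., 0.367879, 0.135335},
    {0.367879, 1., 0.367879},
    {0.135335, 0.367879, 1.}} *)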
Outer is highly optimized for several built-in functions (Plus, Times, List). Therefore
Exp@-Abs@Outer[Plus, #, -#] &@Range[-10, 10, 0.02]; // RepeatedTiming
(* {0.025, Null} *)
gives a ~50x speedup over Outer[#1 - #2 &, #, #] and a ~15x speedup over Outer[Subtract, #, #]. It is also a bit faster than Kuba's Exp[-Abs[x - #]] & /@ x.
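As a quick consistency check (a sketch; since machine-precision negation is exact, the Plus-based trick should reproduce the Subtract-based matrix exactly):

x = Range[-10, 10, 0.02];
Exp@-Abs@Outer[Plus, x, -x] == Exp[-Abs[Outer[Subtract, x, x]]]
(* True *)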
Yes, two things help. The first is that Subtract is going to execute faster than #1 - #2 &, and the other is that all the operations involved in a are Listable, so getting rid of the _?NumericQ restriction speeds things up greatly. On my computer, this amounts to an order-of-magnitude speedup:
With[{x = Table[-10 + 0.02 (j - 1), {j, 1, 1001}]},
Outer[a[#1 - #2] &, x, x]]; // AbsoluteTiming
(* {2.29455, Null} *)
With[{x = Table[-10 + 0.02 (j - 1), {j, 1, 1001}]},
a[Outer[Subtract, x, x]]]; // AbsoluteTiming
(* {0.213449, Null} *)
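Combining both ideas gives a minimal sketch of the fully vectorized version (aL is a hypothetical name for the unrestricted definition, kept separate here so it does not clash with the pattern-restricted a above):

aL[x_] := Exp[-Abs[x]];  (* Exp, Abs, and Times are all Listable, so aL threads over arrays *)
x = Range[-10, 10, 0.02];
AL = aL[Outer[Subtract, x, x]];  (* one Listable pass over the whole 1001 x 1001 matrix *)

Because the body threads element-wise, a single call on the Outer[Subtract, x, x] matrix replaces the million scalar evaluations that the _?NumericQ version forces.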