Replacing NAs in R with nearest value
Here is a very fast one. It uses findInterval to find which two non-NA positions should be considered for each NA in your original data:
f1 <- function(dat) {
  N <- length(dat)
  na.pos <- which(is.na(dat))
  if (length(na.pos) %in% c(0, N)) {
    return(dat)   # nothing to do: no NAs, or nothing but NAs
  }
  non.na.pos <- which(!is.na(dat))
  # for each NA, find which two non-NA positions bracket it
  # (all.inside = TRUE clamps NAs at the ends into the first/last interval)
  intervals <- findInterval(na.pos, non.na.pos, all.inside = TRUE)
  left.pos  <- non.na.pos[pmax(1, intervals)]
  right.pos <- non.na.pos[pmin(N, intervals + 1)]
  left.dist  <- na.pos - left.pos
  right.dist <- right.pos - na.pos
  # fill from whichever neighbour is closer; ties go left
  dat[na.pos] <- ifelse(left.dist <= right.dist, dat[left.pos], dat[right.pos])
  return(dat)
}
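To see what findInterval contributes, here is a small trace on a toy vector (my illustration, not part of the timing test below):

dat <- c(10, 20, NA, NA, 50, 60, NA)
na.pos     <- which(is.na(dat))     # 3 4 7
non.na.pos <- which(!is.na(dat))    # 1 2 5 6
findInterval(na.pos, non.na.pos, all.inside = TRUE)
# [1] 2 2 3   -- each NA lies between non.na.pos[i] and non.na.pos[i + 1]
f1(dat)
# [1] 10 20 20 50 50 60 60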
And here I test it:
# sample data, suggested by @JeffAllen
dat <- as.integer(runif(50000, min=0, max=10))
dat[dat==0] <- NA
# computation times
system.time(r0 <- f0(dat)) # your function
# user system elapsed
# 5.52 0.00 5.52
system.time(r1 <- f1(dat)) # this function
# user system elapsed
# 0.01 0.00 0.03
identical(r0, r1)
# [1] TRUE
I like all the rigorous solutions. Though not directly what was asked, I found this post while looking for a way to fill NA values with an interpolation. After reviewing this post I discovered na.fill on a zoo object (vector, factor, or matrix):
z <- c(1,2,3,4,5,6,NA,NA,NA,2,3,4,5,6,NA,NA,4,6,7,NA)
z1 <- zoo::na.fill(z, "extend")
Note the smooth transition across the NA values:
round(z1, 0)
#> [1] 1 2 3 4 5 6 5 4 3 2 3 4 5 6 5 5 4 6 7 7
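If you want the interior interpolation without the ends being extended, zoo::na.approx should do it (my note, not part of the answer above; na.rm = FALSE keeps leading/trailing NAs rather than dropping them):

z2 <- zoo::na.approx(z, na.rm = FALSE)
# interior NAs are linearly interpolated; the trailing NA stays NA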
Perhaps this could help.

Code below. The initial question was not totally well-defined; I had asked for these clarifications:
- Is it guaranteed that at least the first and/or last entries are non-NA? [No]
- What to do if all entries in a row are NA? [Leave as-is]
- Do you care how ties are split, i.e. how to treat the middle NA in 1 3 NA NA NA 5 7? [Don't-care / left]
- Do you have an upper bound (S) on the longest contiguous span of NAs in a row? (I'm thinking a recursive solution if S is small, or a dataframe solution with ifelse if S is large and the number of rows and cols is large.) [Worst-case S could be pathologically large, hence recursion should not be used]
geoffjentry, re your solution: the bottlenecks will be the serial calculation of nearest.non.na.pos and the serial assignment dat[na.pos] <- dat[non.na.pos[nearest.non.na.pos]]. For a large gap of length G, all we really need to compute is that the first ceiling(G/2) items fill from the left and the rest from the right. (I could post an answer using ifelse, but it would look similar.) Are your criteria runtime, big-O efficiency, temp memory usage, or code legibility?
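A tiny illustration of that point (my example): for a gap of length G = 5 between the values 2 and 9,

g <- c(2, NA, NA, NA, NA, NA, 9)
# nearest-value fill, ties going left: the first ceiling(5/2) = 3 NAs
# take the left value and the last 2 take the right one:
# 2 2 2 2 9 9 9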
Coupla possible tweaks:
- Only need to compute N <- length(dat) once.
- Common-case speed enhancements (sketched after this list): if (length(na.pos) == 0), skip the row, since it has no NAs; if (length(na.pos) == length(dat) - 1), we are in the (rare) case where there is only one non-NA entry, hence we fill the entire row with it.
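A sketch of those guards (my code; meant to sit at the top of a per-row function such as f1 above):

na.pos <- which(is.na(dat))
if (length(na.pos) == 0) return(dat)        # no NAs: nothing to do
if (length(na.pos) == length(dat) - 1) {
  # single non-NA entry: fill the whole row with it
  return(rep(dat[!is.na(dat)], length(dat)))
}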
Outline solution:
Sadly na.locf does not work on an entire dataframe; you must apply it row-wise (note that sapply over a data frame goes column-wise, so use apply with MARGIN = 1):
na.fill_from_nn <- function(x) {
  row.na <- is.na(x)                                           # positions of the NAs
  fillFromLeft  <- na.locf(x, na.rm = FALSE)                   # fill each NA from the left
  fillFromRight <- na.locf(x, fromLast = TRUE, na.rm = FALSE)  # ... and from the right
  disagree <- rle(fillFromLeft != fillFromRight)               # NA runs where the fills differ
  for (loc in disagree) { ... resolve conflicts, row-wise }    # (outline only)
}
apply(dat, 1, na.fill_from_nn)
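A runnable version of that outline might look like this. This is my sketch (assuming zoo's na.locf); it resolves each NA run by filling its first ceiling(G/2) entries from the left and the rest from the right, i.e. ties go left as agreed above:

library(zoo)   # for na.locf

na.fill_from_nn <- function(x) {
  fillFromLeft  <- na.locf(x, na.rm = FALSE)
  fillFromRight <- na.locf(x, fromLast = TRUE, na.rm = FALSE)
  # runs touching the start/end of the vector have only one side to fill from
  fillFromLeft[is.na(fillFromLeft)]   <- fillFromRight[is.na(fillFromLeft)]
  fillFromRight[is.na(fillFromRight)] <- fillFromLeft[is.na(fillFromRight)]
  runs   <- rle(is.na(x))
  ends   <- cumsum(runs$lengths)
  starts <- ends - runs$lengths + 1
  for (i in which(runs$values)) {                # each run of NAs
    span <- starts[i]:ends[i]
    left <- seq_len(ceiling(length(span) / 2))   # first half (ties go left)
    x[span[left]]  <- fillFromLeft[span[left]]
    x[span[-left]] <- fillFromRight[span[-left]]
  }
  x
}
t(apply(dat, 1, na.fill_from_nn))   # row-wise over a matrix or data frame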
Alternatively, since as you say contiguous NAs are rare, do a fast-and-dumb ifelse to fill isolated NAs from the left. This operates data-frame-wise, which makes the common case fast; then handle all the other cases with a row-wise for-loop. (This will affect the tiebreak on middle elements in a long span of NAs, but you say you don't care.) A sketch of the whole-frame step follows.
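A sketch of that fast-and-dumb whole-frame step (my code; assumes dat is a numeric matrix, or a data frame coercible to one):

m <- as.matrix(dat)
left.neighbor <- cbind(NA, m[, -ncol(m)])    # each cell's left-hand neighbor
fixable <- is.na(m) & !is.na(left.neighbor)  # NAs with a non-NA directly to the left
m[fixable] <- left.neighbor[fixable]         # fill them from the left in one shot

This covers every isolated NA (and the first element of each longer run, which would fill from the left anyway under the ties-go-left rule); the remaining NAs then go through the slow row-wise path.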
I can't think of an obvious simple solution, but, having looked at the suggestions (particularly smci's suggestion of using rle), I came up with a complicated function that appears to be more efficient.

This is the code; I'll explain below:
# Your function
your.func = function(dat) {
  na.pos <- which(is.na(dat))
  if (length(na.pos) == length(dat)) {
    return(dat)   # all NAs: leave as-is
  }
  non.na.pos <- setdiff(seq_along(dat), na.pos)
  # for each NA, find the index of the nearest non-NA (serial, hence slow)
  nearest.non.na.pos <- sapply(na.pos, function(x) which.min(abs(non.na.pos - x)))
  dat[na.pos] <- dat[non.na.pos[nearest.non.na.pos]]
  dat
}
# My function
my.func = function(dat) {
  nas <- is.na(dat)
  if (!any(!nas)) return(dat)   # all NAs: leave as-is
  t <- rle(nas)
  # one sequence per run of NAs (lapply, not sapply, so that equal-length
  # runs are not simplified into a matrix, which would break the rev below)
  f <- lapply(t$lengths[t$values], seq)
  a <- unlist(f)                # distance to the nearest non-NA on the left
  b <- unlist(lapply(f, rev))   # distance to the nearest non-NA on the right
  x <- which(nas)
  l <- length(dat)
  # ties (a == b) fill from the left; the inner ifelse's handle runs that
  # touch the start or the end of the vector
  dat[nas] <- ifelse(a > b,
                     dat[ifelse((x + b) > l, x - a, x + b)],
                     dat[ifelse((x - a) < 1, x + b, x - a)])
  dat
}
# Test
n <- 100000
test.vec <- 1:n
set.seed(1)
test.vec[sample(test.vec, n/4)] <- NA
system.time(t1 <- my.func(test.vec))
system.time(t2 <- your.func(test.vec))   # ~10 times speed improvement on my machine
# Verify
any(t1 != t2)
My function relies on rle. I am reading the comments above, but it looks to me like rle works just fine for NA (in any case, my function runs rle on the logical vector is.na(dat), which itself contains no NAs). It is easiest to explain with a small example.
If I start with a vector:
dat=c(1,2,3,4,NA,NA,NA,8,NA,10,11,12,NA,NA,NA,NA,NA,18)
I then get the positions of all the NAs:
x=c(5,6,7,9,13,14,15,16,17)
Then, for every "run" of NAs I create a sequence from 1 to the length of the run:
a=c(1,2,3,1,1,2,3,4,5)
Then I do it again, but I reverse the sequence:
b=c(3,2,1,1,5,4,3,2,1)
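Spelled out on this example (my trace of the intermediate values):

t <- rle(is.na(dat))
t$lengths[t$values]      # 3 1 5  (lengths of the three NA runs)
f <- lapply(t$lengths[t$values], seq)
unlist(f)                # a: 1 2 3 1 1 2 3 4 5
unlist(lapply(f, rev))   # b: 3 2 1 1 5 4 3 2 1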
Now I can just compare the vectors a and b: if a <= b, look back and grab the value at x - a; if a > b, look ahead and grab the value at x + b. The rest is just handling the corner cases where the whole vector is NA, or an NA run touches the start or the end of the vector.
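On the example vector this yields (my computation):

my.func(dat)
# [1]  1  2  3  4  4  4  8  8  8 10 11 12 12 12 12 18 18 18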
There is probably a better, simpler, solution, but I hope this gets you started.