Optimizing COUNT(DISTINCT) slowness even with covering indexes

If the query runs much faster without the count(distinct), perhaps you can use nested aggregation:

SELECT log_type, log_date,
       count(*) AS COUNT, sum(cnt) AS total
FROM (SELECT log_type,
             DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p') AS log_date,
             subscriber_id, count(*) as cnt
      FROM stats.campaign_logs USE INDEX(campid_domain_logtype_logtime_subid_index)
      WHERE DOMAIN = 'xxx' AND
            campaign_id = '123' AND
            log_type IN ('EMAIL_SENT', 'EMAIL_OPENED', 'EMAIL_CLICKED') AND
            log_time BETWEEN CONVERT_TZ('2015-02-01 00:00:00','+00:00','+05:30') AND 
                             CONVERT_TZ('2015-03-01 23:59:58','+00:00','+05:30')
      GROUP BY log_type, log_date, subscriber_id
     ) l
GROUP BY log_type, log_date;

With a bit of luck, this will take 2-3 seconds rather than 50. However, you might need to break this out into separate subqueries to get full performance. So, if this does not give a significant performance gain, change the IN back to = for one of the types. If that works, then the UNION ALL approach may be necessary.
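
For illustration, here is a sketch of what that breakout could look like: one copy of the nested aggregation per log_type, glued together with UNION ALL. It only reuses names from the query above and is untested, so treat it as a starting point rather than a drop-in replacement.

SELECT log_type, log_date, count(*) AS COUNT, sum(cnt) AS total
FROM (SELECT log_type,
             DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p') AS log_date,
             subscriber_id, count(*) AS cnt
      FROM stats.campaign_logs USE INDEX(campid_domain_logtype_logtime_subid_index)
      WHERE DOMAIN = 'xxx' AND
            campaign_id = '123' AND
            log_type = 'EMAIL_SENT' AND
            log_time BETWEEN CONVERT_TZ('2015-02-01 00:00:00','+00:00','+05:30') AND
                             CONVERT_TZ('2015-03-01 23:59:58','+00:00','+05:30')
      GROUP BY log_type, log_date, subscriber_id
     ) s1
GROUP BY log_type, log_date
UNION ALL
SELECT log_type, log_date, count(*) AS COUNT, sum(cnt) AS total
FROM (SELECT log_type,
             DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p') AS log_date,
             subscriber_id, count(*) AS cnt
      FROM stats.campaign_logs USE INDEX(campid_domain_logtype_logtime_subid_index)
      WHERE DOMAIN = 'xxx' AND
            campaign_id = '123' AND
            log_type = 'EMAIL_OPENED' AND
            log_time BETWEEN CONVERT_TZ('2015-02-01 00:00:00','+00:00','+05:30') AND
                             CONVERT_TZ('2015-03-01 23:59:58','+00:00','+05:30')
      GROUP BY log_type, log_date, subscriber_id
     ) s2
GROUP BY log_type, log_date
UNION ALL
SELECT log_type, log_date, count(*) AS COUNT, sum(cnt) AS total
FROM (SELECT log_type,
             DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p') AS log_date,
             subscriber_id, count(*) AS cnt
      FROM stats.campaign_logs USE INDEX(campid_domain_logtype_logtime_subid_index)
      WHERE DOMAIN = 'xxx' AND
            campaign_id = '123' AND
            log_type = 'EMAIL_CLICKED' AND
            log_time BETWEEN CONVERT_TZ('2015-02-01 00:00:00','+00:00','+05:30') AND
                             CONVERT_TZ('2015-03-01 23:59:58','+00:00','+05:30')
      GROUP BY log_type, log_date, subscriber_id
     ) s3
GROUP BY log_type, log_date;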

EDIT:

Another attempt is to use variables to enumerate the values before the group by:

SELECT log_type, log_date, count(*) as cnt,
       SUM(rn = 1) as sub_cnt
FROM (SELECT log_type,
             DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p') AS log_date,
             subscriber_id,
             -- rn restarts at 1 whenever the (log_type, log_date, subscriber_id) key changes,
             -- so SUM(rn = 1) in the outer query counts distinct subscribers per group
             (@rn := if(@clt = concat_ws(':', log_type, DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p'), subscriber_id), @rn + 1,
                        if(@clt := concat_ws(':', log_type, DATE_FORMAT(CONVERT_TZ(log_time,'+00:00','+05:30'),'%l %p'), subscriber_id), 1, 1)
                       )
              ) as rn
       FROM stats.campaign_logs USE INDEX(campid_domain_logtype_logtime_subid_index) CROSS JOIN
            (SELECT @rn := 0, @clt := '') vars
      WHERE DOMAIN = 'xxx' AND
            campaign_id = '123' AND
            log_type IN ('EMAIL_SENT', 'EMAIL_OPENED', 'EMAIL_CLICKED') AND
            log_time BETWEEN CONVERT_TZ('2015-02-01 00:00:00', '+00:00', '+05:30') AND 
                             CONVERT_TZ('2015-03-01 23:59:58', '+00:00', '+05:30')
      ORDER BY log_type, log_date, subscriber_id
     ) t
GROUP BY log_type, log_date;

This still requires another sort of the data, but it might help.


To answer your question:

Is there a way to refactor my query, or add an index, or something, to get the COUNT(DISTINCT...) values, but much faster?

Yes, do not group by a calculated field (do not group by the result of a function). Instead, pre-calculate it, save it in a persistent column, and include that persistent column in the index.

I would try to do the following and see if it changes performance significantly.

1) Simplify the query and focus on one part. Keep only the longest-running SELECT of the three and drop the UNION for the duration of tuning. Once the longest SELECT is optimized, add the others back and check how the full query performs.

2) Grouping by the result of a function doesn't let the engine use the index efficiently. Add another column to the table (temporarily at first, just to check the idea) with the result of this function. As far as I can see you want to group by 1 hour, so add a column log_time_hour DATETIME and set it to log_time truncated to the hour (preserving the date component).
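
For example, here is a minimal sketch of adding and backfilling such a column. The expressions are illustrative only: the CONVERT_TZ to '+05:30' keeps the hourly buckets in the same offset the original query groups by, and the application (or a trigger) would have to keep the column current for new rows.

ALTER TABLE stats.campaign_logs ADD COLUMN log_time_hour DATETIME;

-- Backfill: truncate log_time to the hour, shifted to '+05:30' as in the original query
UPDATE stats.campaign_logs
SET log_time_hour = DATE_FORMAT(CONVERT_TZ(log_time, '+00:00', '+05:30'), '%Y-%m-%d %H:00:00');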

Add an index using the new column: (domain, campaign_id, log_type, log_time_hour, subscriber_id). The order of the first three columns in the index should not matter (because the query compares each of them to a constant with equality, not a range), but keep them in the same order as in the query. Or, better, put them, both in the index definition and in the query, in order of selectivity: if you have 100,000 campaigns, 1,000 domains and 3 log types, then put them in this order: campaign_id, domain, log_type. It should not matter much, but it is worth checking. log_time_hour has to come fourth in the index definition and subscriber_id last.
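
A possible index definition, using the selectivity ordering suggested above (the index name is just an example):

ALTER TABLE stats.campaign_logs
  ADD INDEX campid_domain_logtype_logtimehour_subid_index
      (campaign_id, domain, log_type, log_time_hour, subscriber_id);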

In the query, use the new column in the WHERE and in the GROUP BY. Make sure that you include all needed columns in the GROUP BY: both log_type and log_time_hour.

Do you need both COUNT and COUNT(DISTINCT)? Leave only COUNT first and measure the performance. Then leave only COUNT(DISTINCT) and measure the performance. Then leave both and measure the performance. See how they compare.

SELECT log_type,
       log_time_hour,
       count(DISTINCT subscriber_id) AS distinct_total,
       COUNT(subscriber_id) AS total
FROM stats.campaign_logs
WHERE DOMAIN='xxx'
  AND campaign_id='123'
  AND log_type = 'EMAIL_OPENED'
  AND log_time_hour >= '2015-02-01 00:00:00' 
  AND log_time_hour <  '2015-03-02 00:00:00'
GROUP BY log_type, log_time_hour