OperationTimedOut error in the cqlsh console of Cassandra
count(*) actually pages through all the data, so a select count(*) from userdetails
without a limit would be expected to time out with that many rows. Some details here:
http://planetcassandra.org/blog/counting-key-in-cassandra/
You may want to consider maintaining the count yourself, using Spark, or, if you just want a ballpark number, grabbing it from JMX.
Grabbing it from JMX can be a little tricky depending on your data model. To get the number of partitions, grab the org.apache.cassandra.metrics:type=ColumnFamily,keyspace={{Keyspace}},scope={{Table}},name=EstimatedColumnCountHistogram
mbean and sum up all 90 values (this is what nodetool cfstats
outputs). That will only give you the number that exists in sstables, so to make it more accurate you can do a flush first, or try to estimate the number in memtables from the MemtableColumnsCount
mbean.
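A tiny sketch of the summing step, in Python. The node names and bucket values below are made up; in practice you would read them from the EstimatedColumnCountHistogram mbean with whatever JMX client you use (jconsole, jmxterm, etc.):

```python
# Sketch: estimate partition counts from EstimatedColumnCountHistogram values.
# The histograms dict is a placeholder -- in reality you would fetch each
# node's bucket values over JMX; nothing here talks to a real cluster.
histograms = {
    "node1": [120, 340, 90, 0, 15],
    "node2": [110, 300, 85, 2, 10],
}

# Summing all bucket values for one node gives that node's estimated
# number of partitions in sstables (the figure nodetool cfstats reports).
per_node = {node: sum(buckets) for node, buckets in histograms.items()}
print(per_node)
```

Remember this still excludes whatever is only in memtables, per the note above.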
For a very basic ballpark number you can grab the estimated partition counts from system.size_estimates
across all the ranges listed (note that this is only the number on one node). Multiply that by the number of nodes, then divide by the replication factor (RF).
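The arithmetic in that last step can be sketched like this; all the numbers are invented, only the formula (sum one node's ranges, multiply by node count, divide by RF) comes from the text above:

```python
# Sketch: ballpark partition count from system.size_estimates.
# partitions_per_range is made up; in practice it would come from
# querying system.size_estimates on one node for your keyspace/table.
partitions_per_range = [5120, 4890, 5300, 4990]  # one node's token ranges
num_nodes = 6             # assumed cluster size
replication_factor = 3    # assumed keyspace RF

one_node_estimate = sum(partitions_per_range)
cluster_estimate = one_node_estimate * num_nodes // replication_factor
print(cluster_estimate)
```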
To change the client timeout limit in Apache Cassandra, there are two techniques:
Technique 1: Modify the cqlshrc file.
Technique 2: Open the cqlsh program and modify the time specified by the client_timeout variable.
For details on how to accomplish this, please refer to this link: https://playwithcassandra.wordpress.com/2015/11/05/cqlsh-increase-timeout-limit/
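As an illustration of Technique 1, a cqlshrc fragment might look like the following. Note the option name depends on your cqlsh version (newer versions read request_timeout under [connection], older ones read client_timeout), and 60 seconds is just an example value:

```ini
; ~/.cassandra/cqlshrc
[connection]
; newer cqlsh versions (value in seconds)
request_timeout = 60
; older cqlsh versions used this name instead
client_timeout = 60
```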
If you use cqlsh: open the script in an editor and find all occurrences of the word "timeout". Change the default value from 10 to 60 and save the script.
You can also increase timeout in the cqlsh command, e.g.:
cqlsh --request-timeout 120 myhost