DynamoDB batchGetItem and Partition Key and Sort Key
You've asked a lot of questions, so I'll try to break it down. (Sorry, I can't answer with PHP code snippets; the sketches below are in Python/boto3 instead.)
> I tried to use batchGetItem to return attributes of more than one item from a table, but it seems to work only with the combination of the partition key and range key. What if I want to identify the requested items only by primary key? Is the only way to create the table without the range key?
BatchGetItem is equivalent to multiple GetItem calls. Each GetItem call retrieves zero or one item, identified by the unique key of the item you wish to retrieve (the primary key). If your table has only a partition key, that's all you specify; otherwise you supply both the partition key and the range key. BatchGetItem simply batches those GetItem calls into a single request to DynamoDB.
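Not PHP, but a rough sketch in Python (boto3), with made-up table and key names, of how the keys are specified:

```python
# Rough sketch with boto3; table names ('UsersByIdOnly', 'OrdersByUserAndDate')
# and key attributes are made up.
import boto3

dynamodb = boto3.resource('dynamodb')

# Each entry in 'Keys' must uniquely identify one item: the partition key
# alone if the table has no range key, otherwise partition key + range key.
response = dynamodb.batch_get_item(
    RequestItems={
        'UsersByIdOnly': {                 # table with a partition key only
            'Keys': [
                {'UserId': 'u-1'},
                {'UserId': 'u-2'},
            ],
        },
        'OrdersByUserAndDate': {           # table with partition + range key
            'Keys': [
                {'UserId': 'u-1', 'OrderDate': '2016-01-01'},
            ],
        },
    }
)

items_by_table = response['Responses']     # table name -> list of items
```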
If you wish to query for multiple items for a given Partition Key, you want to look at the Query API.
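Again a Python (boto3) sketch rather than PHP, with hypothetical names, showing a Query for everything under one partition key value:

```python
# Rough sketch with boto3; 'OrdersByUserAndDate' and its attributes are made up.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('OrdersByUserAndDate')

# Query returns every item stored under the given partition key value.
response = table.query(
    KeyConditionExpression=Key('UserId').eq('u-1')
)
orders = response['Items']
```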
> What are the advantages of using a Partition Key and Sort Key, besides the fact that all items with the same partition key value are stored physically close together?
This is a difficult question to answer, as it heavily depends on the unique key of your data model.
Some advantages that come to mind:

1. A sort key lets you sort the data on that attribute, in ascending or descending order.
2. Sort keys support more comparison operators (e.g. greater than, less than, between, begins with) in key conditions. See the docs and the sketch below.
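As a rough illustration of those two points (Python/boto3 again, with made-up table and attribute names):

```python
# Rough sketch with boto3; table and attribute names are made up.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('OrdersByUserAndDate')

# 1. Sorting: return items in descending order of the sort key (newest first).
newest_first = table.query(
    KeyConditionExpression=Key('UserId').eq('u-1'),
    ScanIndexForward=False,
)

# 2. Richer comparisons on the sort key, e.g. a date range ...
january = table.query(
    KeyConditionExpression=Key('UserId').eq('u-1')
        & Key('OrderDate').between('2016-01-01', '2016-01-31'),
)

# ... or a begins_with prefix match.
year_2016 = table.query(
    KeyConditionExpression=Key('UserId').eq('u-1')
        & Key('OrderDate').begins_with('2016-'),
)
```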
> How do I handle the request if I need more than 100 items? Do I just loop through all the items in code, requesting 100 at a time, or is there another way to achieve it via the AWS SDK for DynamoDB?
If you request more than 100 items, BatchGetItem returns a ValidationException with the message "Too many items requested for the BatchGetItem call". You will need to loop through your keys, requesting them 100 at a time, to get everything you need. Keep in mind there is also a 16 MB response size limit: if DynamoDB cannot return every requested item (because of that limit or throttling), the keys it skipped are returned in the response under "UnprocessedKeys".
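A minimal sketch of that loop in Python (boto3), with hypothetical names, leaving unprocessed keys aside for the moment:

```python
# Rough sketch with boto3; table and key names are made up. Splits the keys
# into batches of 100 and issues one BatchGetItem call per batch.
import boto3

dynamodb = boto3.resource('dynamodb')

def batch_get_in_chunks(table_name, keys, chunk_size=100):
    items = []
    for i in range(0, len(keys), chunk_size):
        chunk = keys[i:i + chunk_size]
        response = dynamodb.batch_get_item(
            RequestItems={table_name: {'Keys': chunk}}
        )
        items.extend(response['Responses'].get(table_name, []))
        # response.get('UnprocessedKeys') still needs handling -- see below.
    return items

all_items = batch_get_in_chunks(
    'UsersByIdOnly',
    [{'UserId': 'u-{}'.format(n)} for n in range(250)],
)
```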
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
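A rough sketch of that retry, again in Python (boto3) with arbitrary backoff and retry settings:

```python
# Rough sketch with boto3; retry limits and sleep times are arbitrary choices.
import time
import boto3

dynamodb = boto3.resource('dynamodb')

def batch_get_with_backoff(request_items, max_retries=5):
    items = {}
    attempt = 0
    while request_items:
        response = dynamodb.batch_get_item(RequestItems=request_items)
        for table_name, table_items in response['Responses'].items():
            items.setdefault(table_name, []).extend(table_items)

        # Anything DynamoDB skipped (size limit or throttling) comes back here,
        # in the same shape as RequestItems, so it can be resubmitted directly.
        request_items = response.get('UnprocessedKeys', {})
        if request_items:
            if attempt >= max_retries:
                raise RuntimeError('Keys still unprocessed after retries')
            time.sleep(0.1 * (2 ** attempt))   # exponential backoff: 0.1s, 0.2s, 0.4s, ...
            attempt += 1
    return items
```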
This documentation explains how to use it.