Thursday, August 21, 2014

Happy to Announce My New Book - DynamoDB Applied Design Patterns

About This Book
Create, design, and manage databases in DynamoDB
Immerse yourself in DynamoDB design examples and use cases, for new users and experts alike
Perform sharding and modeling to give your applications the low-cost NoSQL edge

Who This Book Is For
If you are an intermediate to advanced DynamoDB developer looking to learn the best practices associated with efficient data modeling, this book is for you.

In Detail
DynamoDB provides fast and predictable performance with seamless scalability. If you are a developer, you can use DynamoDB to create a database table that can store and retrieve any amount of data, and serve any level of request traffic. DynamoDB automatically spreads the data and traffic across multiple Availability Zones in a Region to provide high built-in availability and data durability. As a database administrator, you can create, scale up or down your request capacity for the DynamoDB table without downtime or performance degradation, and gain visibility into resource utilization and performance metrics.
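As a taste of the kind of operations the paragraph above describes, here is a minimal sketch of the request shapes used with DynamoDB's CreateTable and UpdateTable APIs (which would be passed to a client such as boto3's `create_table`/`update_table`). The table name "Books" and the key schema are illustrative assumptions, not taken from the book.

```python
# Sketch of DynamoDB CreateTable / UpdateTable request parameters.
# Table name "Books" and its key schema are hypothetical examples.

def create_table_params(table_name):
    """Parameters for CreateTable: a table with a simple string hash key."""
    return {
        "TableName": table_name,
        "KeySchema": [{"AttributeName": "BookId", "KeyType": "HASH"}],
        "AttributeDefinitions": [{"AttributeName": "BookId", "AttributeType": "S"}],
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    }

def scale_table_params(table_name, reads, writes):
    """Parameters for UpdateTable: scale request capacity up or down
    without downtime, as described above."""
    return {
        "TableName": table_name,
        "ProvisionedThroughput": {"ReadCapacityUnits": reads,
                                  "WriteCapacityUnits": writes},
    }

# With boto3 these would be used roughly as:
#   client = boto3.client("dynamodb")
#   client.create_table(**create_table_params("Books"))
#   client.update_table(**scale_table_params("Books", 50, 10))
print(create_table_params("Books")["TableName"])
```

The point of separating the parameter builders from the client calls is only to make the request shapes visible; in real code you would call the client directly.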
This book is designed as a complete solutions guide for AWS DynamoDB, a fully managed, proprietary NoSQL database service offered as part of the Amazon Web Services portfolio. You will learn how to create, design, and manage databases in DynamoDB using the AWS SDKs and APIs as well as the AWS Management Console, and then design a browser-based graphical user interface for interacting with the service.
The book includes a significant number of examples that new users and experts alike can use. It begins with the concepts of the data model, including tables, items, attributes, primary keys, indexes, and design patterns. You will learn to access DynamoDB through the Management Console, the command line, and the Eclipse plugin, and you will gain insights into DynamoDB Local and the CLI commands. Furthermore, global and local secondary indexes and their importance in DynamoDB are examined. You will then learn how to use Query and Scan operations on DynamoDB tables, along with the DynamoDB APIs and their formats. The book ends by covering the best DynamoDB design use cases and architectures, along with real-world problem statements and their best solutions.
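To illustrate the Query-versus-Scan distinction mentioned above, here is a hedged sketch of the two request shapes (the table and attribute names "Orders", "CustomerId", and "OrderStatus" are assumptions for the example, not from the book). Query reads only the items matching a key condition; Scan reads every item in the table and filters afterwards, which is far more expensive on large tables.

```python
# Sketch contrasting DynamoDB Query and Scan request parameters.
# Table/attribute names here are hypothetical examples.

def query_params(table, customer_id):
    """Query: touches only items whose partition key matches (efficient)."""
    return {
        "TableName": table,
        "KeyConditionExpression": "CustomerId = :cid",
        "ExpressionAttributeValues": {":cid": {"S": customer_id}},
    }

def scan_params(table, status):
    """Scan: reads the whole table, then applies the filter (expensive)."""
    return {
        "TableName": table,
        "FilterExpression": "OrderStatus = :st",
        "ExpressionAttributeValues": {":st": {"S": status}},
    }

# With boto3 these would be used roughly as:
#   client.query(**query_params("Orders", "C-42"))
#   client.scan(**scan_params("Orders", "SHIPPED"))
print(sorted(query_params("Orders", "C-42")))
```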
By the end of the book, you will have everything it takes to use DynamoDB efficiently, to its utmost capabilities.

ISBN13  9781783551897
Paperback  179 pages

You can also check out my other titles.

Thursday, January 2, 2014

Cloud Computing as an Alternative to Super Computers

Cycle Computing, a company that provides technical solutions for high-performance computing (HPC), has set a new record. A combined cluster spanning eight geographically distant Amazon data centers, running on CycleServer, sustained more than 1.21 Pflops for eighteen hours. Each virtual machine accounted for roughly 9.3 processor cores. The performance of such a cluster could seemingly compete with the Japanese supercomputer Helios, which sits at 20th place in the Top 500. However, Cycle Computing's chief executive Jason Stowe explains that, for now, this only shows the theoretical combined performance of Amazon's data centers. Demonstrating more than a quadrillion operations per second on a specialized benchmark program such as Linpack requires high-speed data transmission between compute nodes that are physically close to each other.

However, the purpose of such a cluster is not to provide all of its available power for solving a single task, but to ensure high performance for thousands of individual virtual machines that do not require high-speed interconnects.
By the classical evaluation methodology, Amazon's main cluster (EC2) takes "only" 127th place in the latest edition of the Top 500 list of the world's fastest supercomputers. It consists of 2,128 eight-core Xeon CPUs and achieves 240 Tflops on Linpack.
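A quick back-of-the-envelope check of those EC2 cluster figures (assuming all cores contribute equally, which is a simplification):

```python
# Consistency check on the EC2 cluster numbers quoted above.
cpus = 2128            # eight-core Xeon CPUs in the cluster
cores_per_cpu = 8
linpack_tflops = 240   # reported Linpack result

total_cores = cpus * cores_per_cpu
gflops_per_core = linpack_tflops * 1000 / total_cores

print(total_cores)                # 17024 cores in total
print(round(gflops_per_core, 1))  # roughly 14.1 Gflops per core
```

About 17,000 cores at roughly 14 Gflops each is plausible for Xeon hardware of that era, so the reported figures are internally consistent.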
Even with that result, for most companies and scientific institutions a cloud computing service such as Amazon Elastic Compute Cloud looks more attractive than a supercomputer of any classical architecture.
Getting a chance to load one of those machines with your own computing tasks is genuinely difficult. You have to justify the importance of your work and wait your turn (about six months). Then you have to rewrite your code together with the supercomputing institute's programmers and pay for all the allotted working days, regardless of the actual amount of computing done. Cloud platforms are available immediately; they scale easily and are much cheaper.
Mark Thompson, a professor of chemistry whose research team develops new coatings for solar cells to make alternative energy more efficient, described his need for high-performance computing to the Ars Technica magazine.
Computer simulation of the molecular properties of candidate compounds greatly reduces time and expense: it filters out less suitable substances at an early stage, without any need to work with them physically.
Thompson's group used the Schrödinger software package on a cloud computing platform. Evaluating the properties of 250,000 potential candidates for the role of new solar-panel coating took a week and cost $33,000. The same work would have required 2.3 million CPU-hours with the traditional approach; in other words, it would have been simply impossible.
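The arithmetic behind those numbers is worth spelling out (assuming "a week" means 168 wall-clock hours):

```python
# Back-of-the-envelope check of the Schrödinger screening run quoted above.
compute_hours = 2_300_000   # CPU-hours the job would need serially
wall_clock_hours = 7 * 24   # one week
cost_usd = 33_000
candidates = 250_000

cores_in_parallel = compute_hours / wall_clock_hours
cost_per_cpu_hour = cost_usd / compute_hours
cost_per_candidate = cost_usd / candidates

print(round(cores_in_parallel))      # ~13690 cores running concurrently
print(round(cost_per_cpu_hour, 3))   # ~$0.014 per CPU-hour
print(round(cost_per_candidate, 3))  # ~$0.132 per screened compound
```

Finishing 2.3 million CPU-hours in a single week implies nearly 14,000 cores running in parallel, at just over a cent per CPU-hour: exactly the kind of burst capacity a cloud platform offers and a queued supercomputer allocation does not.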
This post was initiated by my dear friend Paul Smith, an experienced writer who is also good at many other things. Writing is his hobby; his posts can be seen on popular sites and essay-writing services. He likes to spend time in the open air.

Your Reviews/Queries Are Welcome