Saturday, December 17, 2016

7 Best Whiteboard Animation Software (2016) For Windows And Mac PC



Best and easy-to-use whiteboard animation software for Microsoft Windows and Mac OS X operating systems. This list of the 7 best doodle, presentation, explainer and whiteboard animation programs contains both desktop whiteboard animation software and web-based whiteboard animation services.

7 Best Alternatives to VideoScribe (2016) For Windows And Mac PC

Every whiteboard animation program in this list except one can also be used as an alternative to VideoScribe, so this list will also help if you are looking for cheaper, more affordable VideoScribe alternatives. None of these alternatives falls short of VideoScribe in any significant way.

The list of best whiteboard animation software and best VideoScribe alternatives contains

1)-Explaindio Video Creator (Best available whiteboard animation software for Windows and Mac OS X) (Great alternative to VideoScribe cheaper than VideoScribe)

2)-VideoScribe (Best available whiteboard animation software for Windows and Mac OS X) 

3)-Easy Sketch Pro (Best available whiteboard animation software for Windows and Mac OS X) (Great alternative to VideoScribe cheaper than VideoScribe)

4)-TTS Sketch Maker (Best available whiteboard animation software for Windows and Mac OS X) (Great alternative to VideoScribe cheaper than VideoScribe)

5)-VideoMakerFX (Best available whiteboard animation software for Windows and Mac OS X) (Great alternative to VideoScribe cheaper than VideoScribe)

6)-GoAnimate (Best available whiteboard animation software for Windows and Mac OS X) (Great alternative to VideoScribe not cheaper than VideoScribe)

7)-Rawshorts.com (Best available whiteboard animation software for Windows and Mac OS X) (Great alternative to VideoScribe not cheaper than VideoScribe)


That completes the list of the 7 best doodle, presentation, explainer and whiteboard animation video makers for Mac and Windows computers and laptops.

I hope this list of the 7 best scribe, doodle and whiteboard animation programs ends your search and that you find it informative and helpful. Please do let me know if you use or know about any other good whiteboard animation software. You can use the comment section for your feedback and suggestions.

Stay tuned, stay blessed, and do not forget to comment, subscribe, like and share.


Saturday, September 3, 2016

HIVE 1.2.1 INSTALLATION ON HADOOP 2.7.1 SINGLE NODE CLUSTER



PREREQUISITES
1) Ubuntu 12.04 or higher
2) Sun Java 6 or higher
3) Hadoop (version 2.7.1) must be installed and configured on the system.
Note: Hive runs in Linux and Windows environments. Mac OS X is also a commonly used development environment.
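
Before moving on, a quick check that the prerequisites are in place (this assumes Hadoop's bin directory is already on your PATH):
$ java -version
$ hadoop version     (should report Hadoop 2.7.1)
$ jps                (shows which Hadoop daemons, if any, are running)
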
INSTALLATION STEPS
Step 1: sudo tar -xzf apache-hive-1.2.1-bin.tar.gz
Step 2: sudo mkdir -p /usr/local/hive
        sudo mv apache-hive-1.2.1-bin /usr/local/hive/
The next steps edit ~/.bashrc for the Hadoop user, so switch to that user first:
su hadoop-user   (use the Hadoop user name on your own system)
It will ask for the password; enter it and then continue with the steps below.
Step 3: sudo nano ~/.bashrc
Step 4: add the following lines to ~/.bashrc and save the file
export HIVE_HOME=/usr/local/hive/apache-hive-1.2.1-bin
export HIVE_CONF_DIR=$HIVE_HOME/conf
export HIVE_CLASS_PATH=$HIVE_CONF_DIR
export PATH=$HIVE_HOME/bin:$PATH
Step 5: source ~/.bashrc
Step 6: hive
Start Hadoop first with the start-all.sh command, then start Hive by simply typing hive. You should see something like this:

hadoop-user@ubuntu:~$ hive

Logging initialized using configuration in jar:file:/usr/local/hive/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar!/hive-log4j.properties
Java HotSpot(TM) Server VM warning: You have loaded library /usr/local/hadoop-2.7.1/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.


hive> 
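
Once the hive> prompt appears, a short smoke test confirms that Hive is working; the table name below is only an illustrative example, not part of the original setup:
hive> show databases;
hive> create table demo_words (word string, freq int);
hive> show tables;
hive> drop table demo_words;
hive> quit;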


Step-by-Step MapReduce Example on Hadoop 2.7.1 (with video)




Step 1: Open a terminal using Ctrl+Alt+T

Step 2: Switch to the Hadoop user:
hadoopashish@ubuntu:~$ su hadoop-user

Step 3: Start all the Hadoop daemons with the command below:
hadoop-user@ubuntu:~$ start-all.sh

Step 4: Make a new data directory on the Desktop and create a text file inside that data folder, for example:
Desktop/data/text.txt
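
If you prefer, the sample file can also be created from the terminal; the sentence used here is just placeholder content:
hadoopashish@ubuntu:~$ mkdir -p ~/Desktop/data
hadoopashish@ubuntu:~$ echo "hello hadoop hello mapreduce world" > ~/Desktop/data/text.txt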

Step 5: After starting Hadoop, run jps to check that the daemons are up:
hadoop-user@ubuntu:~$ jps
6385 ResourceManager
6146 SecondaryNameNode
5637 NameNode
6696 NodeManager
5866 DataNode
12510 Jps

Step 6: After that, change to the Hadoop installation directory:
hadoop-user@ubuntu:~$ cd /usr/local/hadoop/
or whatever path your installation uses; mine is
hadoop-user@ubuntu:~$ cd /usr/local/hadoop-2.7.1/

Step 7: From inside that folder, create the user directories in HDFS:
hadoop-user@ubuntu:/usr/local/hadoop-2.7.1$ bin/hdfs dfs -mkdir /user
press Enter, then
hadoop-user@ubuntu:/usr/local/hadoop-2.7.1$ bin/hdfs dfs -mkdir /user/ashish/
and press Enter again.

Step 8: The next step is to open a web browser to verify that the user directory was created. Type the address below into the navigation bar
localhost:50070/
and press Enter. In the Hadoop web UI go to Utilities --> Browse the file system, where you can see the Browse Directory list. Click the GO button and you will see the user directory; select it and you will find the directory created in the previous step. In my case I see the ashish directory.
That completes the verification.
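
The same verification can be done from the terminal instead of the browser:
hadoop-user@ubuntu:/usr/local/hadoop-2.7.1$ bin/hdfs dfs -ls /user
hadoop-user@ubuntu:/usr/local/hadoop-2.7.1$ bin/hdfs dfs -ls /user/ashish/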

Step 9: Now we provide the input, which is stored in Desktop/data/text.txt, by copying it into HDFS:
hadoop-user@ubuntu:/usr/local/hadoop-2.7.1$ hdfs dfs -put /home/hadoopashish/Desktop/data input
This uploads the local data directory into HDFS as the input directory.
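
You can confirm that the files landed in HDFS with:
hadoop-user@ubuntu:/usr/local/hadoop-2.7.1$ hdfs dfs -ls input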

Step 10: Now the main step: running the MapReduce job.
hadoop is the command that launches the job.
jar tells it to run a Java archive, in this case hadoop-mapreduce-examples-2.7.1.jar.
wordcount is the name of the example program (WordCount.java) that ships inside hadoop-mapreduce-examples-2.7.1.jar.
input is our input directory, and output is the directory where the results will be stored after the command runs.
hadoop-user@ubuntu:/usr/local/hadoop-2.7.1$ hadoop  jar  share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar  wordcount  input  output

Press Enter and your MapReduce job will run; you can watch its progress in the terminal window.

Step 11: When the job finishes, go back to the browser, open localhost:50070, and go to Utilities --> Browse the file system. Click the GO button, go into user and then into your user's directory (mine is hadoop-user, where the files are stored).
You can see two entries there, input and output.
Inside the output folder there is a part-r-00000 file and a _SUCCESS file; part-r-00000 is our output file.
Open it and you can see the output of the job.
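
The output can also be read straight from the terminal instead of the browser:
hadoop-user@ubuntu:/usr/local/hadoop-2.7.1$ hdfs dfs -ls output
hadoop-user@ubuntu:/usr/local/hadoop-2.7.1$ hdfs dfs -cat output/part-r-00000
The second command prints each word from text.txt followed by its count.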




Friday, September 2, 2016

Hadoop 2.7.0 Single Node Cluster Setup on Ubuntu 15.04




$ sudo apt-get update

$ sudo apt-get install default-jdk

$ java -version

$ sudo apt-get install ssh

$ sudo apt-get install rsync

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
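
Passwordless SSH to localhost can be verified before continuing (the first connection may ask you to accept the host key):

$ ssh localhost

$ exit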

$ wget -c http://apache.mirrors.lucidnetworks.net/hadoop/common/hadoop-2.7.0/hadoop-2.7.0.tar.gz

$ sudo tar -zxvf hadoop-2.7.0.tar.gz

$ sudo mv hadoop-2.7.0 /usr/local/hadoop

$ update-alternatives --config java

$ sudo nano ~/.bashrc

          #Hadoop Variables
          export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
          export HADOOP_HOME=/usr/local/hadoop
          export PATH=$PATH:$HADOOP_HOME/bin
          export PATH=$PATH:$HADOOP_HOME/sbin
          export HADOOP_MAPRED_HOME=$HADOOP_HOME
          export HADOOP_COMMON_HOME=$HADOOP_HOME
          export HADOOP_HDFS_HOME=$HADOOP_HOME
          export YARN_HOME=$HADOOP_HOME
          export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
          export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"

$ source ~/.bashrc
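
If the variables were picked up correctly, the hadoop command should now be on the PATH and report version 2.7.0:

$ hadoop version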

$ cd /usr/local/hadoop/etc/hadoop

$ sudo nano hadoop-env.sh

          #The java implementation to use.
          export JAVA_HOME="/usr/lib/jvm/java-7-openjdk-amd64"

$ sudo nano core-site.xml

          <configuration>
                  <property>
                      <name>fs.defaultFS</name>
                      <value>hdfs://localhost:9000</value>
                  </property>
          </configuration>

$ sudo nano yarn-site.xml

          <configuration>
                  <property>
                      <name>yarn.nodemanager.aux-services</name>
                      <value>mapreduce_shuffle</value>
                  </property>
                  <property>
                      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
                  </property>
          </configuration>

$ sudo cp mapred-site.xml.template mapred-site.xml

$ sudo nano mapred-site.xml

          <configuration>
                  <property>
                      <name>mapreduce.framework.name</name>
                      <value>yarn</value>
                  </property>
          </configuration>

$ sudo nano hdfs-site.xml

          <configuration>
                  <property>
                      <name>dfs.replication</name>
                      <value>1</value>
                  </property>
                  <property>
                      <name>dfs.namenode.name.dir</name>
                      <value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
                  </property>
                  <property>
                      <name>dfs.datanode.data.dir</name>
                      <value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
                  </property>
          </configuration>

$ cd
$ sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode

$ sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode

$ sudo chown chaal:chaal -R /usr/local/hadoop   (replace chaal:chaal with your own user and group)

$ hdfs namenode -format

$ start-all.sh

$ jps



http://192.168.56.10:8088/   (YARN ResourceManager web UI)
http://192.168.56.10:50070/   (HDFS NameNode web UI; replace 192.168.56.10 with your machine's IP or use localhost)
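
As a final smoke test, you can create a directory in HDFS, list it, and stop the cluster when you are done; the directory name below is only an example:

$ hdfs dfs -mkdir -p /user/chaal

$ hdfs dfs -ls /user

$ stop-all.sh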

Wednesday, June 22, 2016

10 of the most popular Big Data tools for developers

The market is full of tools for developers but CBR has compiled a list of the best.
1. Splice Machine
This is a real-time SQL-on-Hadoop database that can help you derive real-time actionable insights, a clear benefit for those aiming for quick development.

This tool offers the ability to use standard SQL and can scale out on commodity hardware, which makes it a good fit for developers who have found that MySQL and Oracle cannot scale to the limits they need.

It is SQL 99 compliant with standard ANSI SQL and can scale from gigabytes to petabytes. 

As well as support for .NET, Java and Python, it also supports applications written in JavaScript/AngularJS.
2. MarkLogic
MarkLogic is built to deal with heavy data loads and allows users to access them through real-time updates and alerts.

It provides geographical data that is combined with content and location relevance along with data filtering tools. This tool is ideal for those looking at paid content search app development. 

It supports flexible APIs such as the Node.js Client API and NoSQL, and it also offers Samplestack to show developers how to implement a reference architecture using key MarkLogic concepts and sample code.
3. Google Charts
Google was bound to be in this list; the search engine giant has fingers in many pies, and app developer tools are another area where the company has a strong offering.

This free tool comes with various capabilities for visualising data from a website such as hierarchical tree maps or just simple charts. 

This tool is easily implemented by embedding JavaScript code on a website and allows you to sort, modify and filter data as well as the ability to connect to a database or pull data from a website. 

Offering support for popular languages and with the security of knowing that Google will likely keep on improving its offering, this is a good option for many standard developers.
4. SAP inMemory
SAP's HANA platform offers a number of advantages over some of the competition, such as the ability to integrate large workloads of data and analyse them in real time. This is extremely beneficial for a developer who is looking for speed to market.

Yes HANA is a platform but it can also be combined with Apache Hadoop and has a number of tools for application development and infinite storage. 

Users have the choice between Eclipse and Web based tools which allows for a more collaborative model of development.
5. Cambridge Semantics
Using the Anzo Software Suite, this open platform helps you to collect, integrate and analyse Big Data to help you build Unified Access solutions. 

The software has a data integration machine that streamlines data collection and assists with analytics. 

The key features include being able to combine data from multiple sources and customised dashboards to make analysis easy.
6. MongoDB
This is an open-source document database that is ideal for developers who want precise control over the final results.
It comes with full index support, the flexibility to index any attribute, and the ability to scale horizontally without affecting functionality. Document-based queries and GridFS for storing files mean that you shouldn't have to compromise your stack.

MongoDB is also scalable and works with third-party log tools such as Edda and Fluentd.
7. Pentaho
Pentaho joins data integration and business analytics for visualising, analysing and blending Big Data. 

The open and embeddable platform comes with extensive analytics capabilities with data mining and predictive analysis. 

This is another option that is well supported by an active community of developers and also has a heavy focus on being easy to use with a recently updated UI. 

It connects to virtually any type of data source, with native support for Hadoop, NoSQL and analytic databases. The data integration tools mean that users do not need to write SQL or MapReduce Java functions by hand.

8. Talend
Straight away, one of the key benefits of Talend's Open Studio is that it is open source, which means that improvements will keep on rolling out as the community tweaks the tool. 

Its tools include products for developing, testing and deploying data management and application integration products. Additionally the company manages the full lifecycle, even across enterprise boundaries.
9. Tableau 
Tableau is one of the better-known names in the data visualisation sphere, and it offers many tools for developers, supported by an active community.

Some of the key features of this software are its in-memory analytics database and advanced query language. API, XML, User Scripts, Python, and JavaScript are all supported and so are a number of browser extensions. 

This tool is designed to simplify development so that even new developers can pick it up.
10. Splunk
Splunk specialises in harnessing machine data generated by a number of different sources, such as websites, applications and sensors. The company also enables developers to write code using any technology platform, language or framework.

Extension tools have been developed for Visual Studio so that .NET developers can build applications using the Splunk SDK for C#.

A plug-in for Eclipse contains a template for building Splunk SDK for Java applications, and the company also provides logging libraries to help log activity from .NET or Java applications.