Storm API Primer Series: Default Workers, Executors, and Tasks of a Storm Topology
Reads: 6270
Published: 2019-06-22

This article is about 9,223 characters long; allow roughly 30 minutes to read it.

I won't go over starting Storm here; see my earlier posts.

Create the stormDemo Maven project

Group Id: zhouls.bigdata
Artifact Id: stormDemo
Package: stormDemo

The project's pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>zhouls.bigdata</groupId>
  <artifactId>stormDemo</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>stormDemo</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.storm</groupId>
      <artifactId>storm-core</artifactId>
      <version>1.0.2</version>
    </dependency>
  </dependencies>
</project>

Write the code: StormTopology.java

This is a running-sum example: the spout keeps emitting increasing integers starting from 1, and the bolt accumulates the sum and prints it.

package zhouls.bigdata.stormDemo;

import java.util.Map;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.AlreadyAliveException;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.generated.InvalidTopologyException;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class StormTopology {

    public static class MySpout extends BaseRichSpout {
        private Map conf;
        private TopologyContext context;
        private SpoutOutputCollector collector;

        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.conf = conf;
            this.collector = collector;
            this.context = context;
        }

        int num = 0;

        public void nextTuple() {
            num++;
            System.out.println("spout:" + num);
            this.collector.emit(new Values(num));
            Utils.sleep(1000);
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("num"));
        }
    }

    public static class MyBolt extends BaseRichBolt {
        private Map stormConf;
        private TopologyContext context;
        private OutputCollector collector;

        public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
            this.stormConf = stormConf;
            this.context = context;
            this.collector = collector;
        }

        int sum = 0;

        public void execute(Tuple input) {
            Integer num = input.getIntegerByField("num");
            sum += num;
            System.out.println("sum=" + sum);
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
        }
    }

    public static void main(String[] args) {
        TopologyBuilder topologyBuilder = new TopologyBuilder();
        String spout_id = MySpout.class.getSimpleName();
        String bolt_id = MyBolt.class.getSimpleName();

        topologyBuilder.setSpout(spout_id, new MySpout());
        topologyBuilder.setBolt(bolt_id, new MyBolt()).shuffleGrouping(spout_id);

        Config config = new Config();
        String topology_name = StormTopology.class.getSimpleName();
        if (args.length == 0) {
            // run locally
            LocalCluster localCluster = new LocalCluster();
            localCluster.submitTopology(topology_name, config, topologyBuilder.createTopology());
        } else {
            // run on the cluster
            try {
                StormSubmitter.submitTopology(topology_name, config, topologyBuilder.createTopology());
            } catch (AlreadyAliveException e) {
                e.printStackTrace();
            } catch (InvalidTopologyException e) {
                e.printStackTrace();
            } catch (AuthorizationException e) {
                e.printStackTrace();
            }
        }
    }
}

Package the project as StormTopology.jar and upload it to the cluster, here into a new jar/ directory via rz:

[hadoop@master apache-storm-1.0.2]$ pwd
/home/hadoop/app/apache-storm-1.0.2
[hadoop@master apache-storm-1.0.2]$ ll
total 204
drwxrwxr-x  2 hadoop hadoop  4096 May 21 17:18 bin
-rw-r--r--  1 hadoop hadoop 82317 Jul 27  2016 CHANGELOG.md
drwxrwxr-x  2 hadoop hadoop  4096 Jul 27 20:12 conf
drwxrwxr-x  3 hadoop hadoop  4096 Jul 27  2016 examples
drwxrwxr-x 17 hadoop hadoop  4096 May 21 17:18 external
drwxrwxr-x  2 hadoop hadoop  4096 Jul 27  2016 extlib
drwxrwxr-x  2 hadoop hadoop  4096 Jul 27  2016 extlib-daemon
drwxrwxr-x  2 hadoop hadoop  4096 May 21 17:18 lib
-rw-r--r--  1 hadoop hadoop 32101 Jul 27  2016 LICENSE
drwxrwxr-x  2 hadoop hadoop  4096 May 21 17:18 log4j2
drwxrwxr-x  2 hadoop hadoop  4096 May 21 19:05 logs
-rw-r--r--  1 hadoop hadoop   981 Jul 27  2016 NOTICE
drwxrwxr-x  6 hadoop hadoop  4096 May 21 17:18 public
-rw-r--r--  1 hadoop hadoop 15287 Jul 27  2016 README.markdown
-rw-r--r--  1 hadoop hadoop     6 Jul 27  2016 RELEASE
-rw-r--r--  1 hadoop hadoop 23774 Jul 27  2016 SECURITY.md
[hadoop@master apache-storm-1.0.2]$ mkdir jar
[hadoop@master apache-storm-1.0.2]$ cd jar/
[hadoop@master jar]$ pwd
/home/hadoop/app/apache-storm-1.0.2/jar
[hadoop@master jar]$ ll
total 0
[hadoop@master jar]$ rz
[hadoop@master jar]$ ll
total 8
-rw-r--r-- 1 hadoop hadoop 4869 Jul 27 22:17 StormTopology.jar
[hadoop@master jar]$
Before submitting the job:
[hadoop@master apache-storm-1.0.2]$ pwd
/home/hadoop/app/apache-storm-1.0.2
[hadoop@master apache-storm-1.0.2]$ ll
total 208
drwxrwxr-x  2 hadoop hadoop  4096 May 21 17:18 bin
-rw-r--r--  1 hadoop hadoop 82317 Jul 27  2016 CHANGELOG.md
drwxrwxr-x  2 hadoop hadoop  4096 Jul 27 20:12 conf
drwxrwxr-x  3 hadoop hadoop  4096 Jul 27  2016 examples
drwxrwxr-x 17 hadoop hadoop  4096 May 21 17:18 external
drwxrwxr-x  2 hadoop hadoop  4096 Jul 27  2016 extlib
drwxrwxr-x  2 hadoop hadoop  4096 Jul 27  2016 extlib-daemon
drwxrwxr-x  2 hadoop hadoop  4096 Jul 27 22:18 jar
drwxrwxr-x  2 hadoop hadoop  4096 May 21 17:18 lib
-rw-r--r--  1 hadoop hadoop 32101 Jul 27  2016 LICENSE
drwxrwxr-x  2 hadoop hadoop  4096 May 21 17:18 log4j2
drwxrwxr-x  2 hadoop hadoop  4096 May 21 19:05 logs
-rw-r--r--  1 hadoop hadoop   981 Jul 27  2016 NOTICE
drwxrwxr-x  6 hadoop hadoop  4096 May 21 17:18 public
-rw-r--r--  1 hadoop hadoop 15287 Jul 27  2016 README.markdown
-rw-r--r--  1 hadoop hadoop     6 Jul 27  2016 RELEASE
-rw-r--r--  1 hadoop hadoop 23774 Jul 27  2016 SECURITY.md
[hadoop@master apache-storm-1.0.2]$ bin/storm jar jar/StormTopology.jar zhouls.bigdata.stormDemo.StormTopology aaa
Running: /home/hadoop/app/jdk/bin/java -client -Ddaemon.name= -Dstorm.options= -Dstorm.home=/home/hadoop/app/apache-storm-1.0.2 -Dstorm.log.dir=/home/hadoop/app/apache-storm-1.0.2/logs -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dstorm.conf.file= -cp /home/hadoop/app/apache-storm-1.0.2/lib/log4j-api-2.1.jar:/home/hadoop/app/apache-storm-1.0.2/lib/kryo-3.0.3.jar:/home/hadoop/app/apache-storm-1.0.2/lib/storm-rename-hack-1.0.2.jar:/home/hadoop/app/apache-storm-1.0.2/lib/log4j-core-2.1.jar:/home/hadoop/app/apache-storm-1.0.2/lib/slf4j-api-1.7.7.jar:/home/hadoop/app/apache-storm-1.0.2/lib/minlog-1.3.0.jar:/home/hadoop/app/apache-storm-1.0.2/lib/objenesis-2.1.jar:/home/hadoop/app/apache-storm-1.0.2/lib/clojure-1.7.0.jar:/home/hadoop/app/apache-storm-1.0.2/lib/servlet-api-2.5.jar:/home/hadoop/app/apache-storm-1.0.2/lib/log4j-slf4j-impl-2.1.jar:/home/hadoop/app/apache-storm-1.0.2/lib/log4j-over-slf4j-1.6.6.jar:/home/hadoop/app/apache-storm-1.0.2/lib/storm-core-1.0.2.jar:/home/hadoop/app/apache-storm-1.0.2/lib/disruptor-3.3.2.jar:/home/hadoop/app/apache-storm-1.0.2/lib/asm-5.0.3.jar:/home/hadoop/app/apache-storm-1.0.2/lib/reflectasm-1.10.1.jar:jar/StormTopology.jar:/home/hadoop/app/apache-storm-1.0.2/conf:/home/hadoop/app/apache-storm-1.0.2/bin -Dstorm.jar=jar/StormTopology.jar zhouls.bigdata.stormDemo.StormTopology aaa
16503 [main] INFO  o.a.s.StormSubmitter - Generated ZooKeeper secret payload for MD5-digest: -5252258187769573644:-8540038416575654367
17093 [main] INFO  o.a.s.s.a.AuthUtils - Got AutoCreds []
18654 [main] INFO  o.a.s.StormSubmitter - Uploading topology jar jar/StormTopology.jar to assigned location: /home/hadoop/data/storm/nimbus/inbox/stormjar-cf402e8a-abf7-46bc-a452-14b53aa6b25e.jar
18939 [main] INFO  o.a.s.StormSubmitter - Successfully uploaded topology jar to assigned location: /home/hadoop/data/storm/nimbus/inbox/stormjar-cf402e8a-abf7-46bc-a452-14b53aa6b25e.jar
18940 [main] INFO  o.a.s.StormSubmitter - Submitting topology StormTopology in distributed mode with conf {"storm.zookeeper.topology.auth.scheme":"digest","storm.zookeeper.topology.auth.payload":"-5252258187769573644:-8540038416575654367"}
23899 [main] INFO  o.a.s.StormSubmitter - Finished submitting topology: StormTopology
[hadoop@master apache-storm-1.0.2]$
Then check the Storm UI:
Why these numbers? If you want to learn Storm properly, you have to dig in and understand where they come from.

The topology shows these figures because the default number of workers is 1, which is exactly what the UI above displays.
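
If you need more than one worker, you can override the default explicitly before submitting. A minimal sketch based on the main() method above; the value 2 is an arbitrary example, not something used in this walkthrough:

```java
// Sketch: overriding the default worker count (topology.workers defaults to 1).
Config config = new Config();
config.setNumWorkers(2); // request 2 worker JVMs instead of the default 1
StormSubmitter.submitTopology(topology_name, config, topologyBuilder.createTopology());
```

This is a configuration fragment that only takes effect when submitted to a running cluster.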

Click into the topology to drill down into its components:

From this we can see that a topology like StormTopology defaults to 1 worker, 3 executors, and 3 tasks.

The three executors are one for MySpout, one for MyBolt, and one acker (by default Storm starts one acker executor per worker). Each executor runs exactly one task by default, hence 3 tasks.
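
All three defaults can be overridden when the topology is built. A sketch reusing the builder and config variables from main() above; the counts here are arbitrary examples, not values from this walkthrough:

```java
// Sketch: setting executors and tasks explicitly (example numbers only).
topologyBuilder.setSpout(spout_id, new MySpout(), 2); // 2 spout executors
topologyBuilder.setBolt(bolt_id, new MyBolt(), 2)     // 2 bolt executors
        .setNumTasks(4)                               // 4 bolt tasks spread over those executors
        .shuffleGrouping(spout_id);
config.setNumWorkers(2); // 2 worker JVMs
config.setNumAckers(0);  // disable acker executors entirely
```

With these settings the UI should show 2 workers, 4 executors (2 spout + 2 bolt, no acker), and 6 tasks (2 spout + 4 bolt), since a component's task count defaults to its executor count unless setNumTasks overrides it.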

 

  

