multiple inputs on logstash jdbc

You can definitely have a single config with multiple jdbc inputs and then parametrize the index and document_type in your elasticsearch output depending on which table the event is coming from.

input {
  jdbc {
    jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
    jdbc_user => "root"
    jdbc_password => "password"
    schedule => "* * * … Read more
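A fuller sketch of such a pipeline, assuming two hypothetical tables (orders and customers); each jdbc input tags its events with a type, and the elasticsearch output reuses that field to pick the index and document_type:

input {
  jdbc {
    jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
    jdbc_user => "root"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "SELECT * FROM orders"
    type => "orders"        # marks every event produced by this input
  }
  jdbc {
    jdbc_driver_library => "/Users/logstash/mysql-connector-java-5.1.39-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/database_name"
    jdbc_user => "root"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "SELECT * FROM customers"
    type => "customers"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{type}"            # index name derived from the input's type
    document_type => "%{type}"
  }
}

The table names, schedules, and statements above are placeholders; the point is simply that each input sets a distinguishing field and the output references it with the %{field} sprintf syntax.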

SPARK SQL – update MySql table using DataFrames and JDBC

It is not possible. As of now (Spark 1.6.0 / 2.2.0-SNAPSHOT), the Spark DataFrameWriter supports only four save modes:

SaveMode.Overwrite: overwrite the existing data.
SaveMode.Append: append the data.
SaveMode.Ignore: ignore the operation (i.e. no-op).
SaveMode.ErrorIfExists: the default option, throw an exception at runtime.

You can insert manually, for example using mapPartitions (since you want an UPSERT … Read more
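A sketch of that manual route, written with foreachPartition (the same per-partition idea as the mapPartitions suggestion, just without producing a result). It assumes Spark 2.x, a hypothetical MySQL table scores(id, score) with a primary key on id, and a DataFrame with matching columns; none of those names come from the original answer.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JdbcUpsertSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("jdbc-upsert-sketch").getOrCreate();

        // Hypothetical DataFrame with columns (id BIGINT, score DOUBLE).
        Dataset<Row> df = spark.read().parquet("/tmp/scores.parquet");

        df.toJavaRDD().foreachPartition(rows -> {
            // One connection and one batched UPSERT statement per partition.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost:3306/database_name", "root", "password");
                 PreparedStatement stmt = conn.prepareStatement(
                         "INSERT INTO scores (id, score) VALUES (?, ?) "
                             + "ON DUPLICATE KEY UPDATE score = VALUES(score)")) {
                while (rows.hasNext()) {
                    Row row = rows.next();
                    stmt.setLong(1, row.getLong(0));
                    stmt.setDouble(2, row.getDouble(1));
                    stmt.addBatch();
                }
                stmt.executeBatch();
            }
        });
    }
}

Batching per partition keeps the number of connections and round trips small; the ON DUPLICATE KEY UPDATE clause is MySQL-specific, so other databases would need their own upsert syntax (e.g. MERGE or INSERT … ON CONFLICT).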

Spark Unable to find JDBC Driver

This person was having a similar issue: http://apache-spark-user-list.1001560.n3.nabble.com/How-to-use-DataFrame-with-MySQL-td22178.html

Have you updated your connector driver to the most recent version? Also, did you specify the driver class when you called load()?

Map<String, String> options = new HashMap<String, String>();
options.put("url", "jdbc:mysql://localhost:3306/video_rcmd?user=root&password=123456");
options.put("dbtable", "video");
options.put("driver", "com.mysql.cj.jdbc.Driver"); // here
DataFrame jdbcDF = sqlContext.load("jdbc", options);

In spark/conf/spark-defaults.conf, you can also set spark.driver.extraClassPath … Read more
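On Spark 2.x the same options can be passed through DataFrameReader instead of the deprecated sqlContext.load(). A minimal sketch, reusing the URL, credentials, table, and driver class from the snippet above (everything else is assumed); the connector jar still has to reach the driver and executors, e.g. via spark-submit --jars or the spark.driver.extraClassPath / spark.executor.extraClassPath settings:

import java.util.Properties;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JdbcReadSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("jdbc-read-sketch").getOrCreate();

        Properties props = new Properties();
        props.setProperty("user", "root");
        props.setProperty("password", "123456");
        // Explicit driver class, equivalent to the "driver" option above.
        props.setProperty("driver", "com.mysql.cj.jdbc.Driver");

        // Read the "video" table over JDBC into a DataFrame.
        Dataset<Row> video = spark.read()
                .jdbc("jdbc:mysql://localhost:3306/video_rcmd", "video", props);

        video.show();
    }
}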