To run a process on a system, the old way is to run the command directly at the command prompt. After a system restart or a process crash, rerunning it requires the user to dig up the original command and execute it again. To address this, we have init.d services. But we are greedy: what if we want the process to start automatically even when it crashes, limit restarts to a certain number of attempts, log the output, and get start/stop/restart/reload functions for free? That is where systemd comes in.
Systemd is an init system and system manager that has become the de facto standard on modern Linux machines. In the old days, if we wanted to configure a service that could be started and stopped, we had to hand-write the handling for every parameter and the action to take when each parameter was passed.
In this blog post we use the Kyligence Analytics Platform (KAP) process as an example: we run KAP on the edge node of HDInsight in Azure to provide the enterprise version of the Apache Kylin service. By running it under systemd, we do not need to handle process termination and restart with custom scripting.
Be aware that there are always two ways to run a process: foreground and background. What is great about systemd is that it handles both automatically, but we need to know how to configure it.
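As a quick, generic shell illustration of the difference (not KAP-specific — `sleep` stands in for any command):

```shell
# Foreground: the shell blocks until the command finishes
sleep 1
echo "foreground command returned"

# Background: the command returns immediately and the shell records its PID
sleep 1 &
echo "background pid is $!"
wait   # reap the background job before the script exits
```

A forking daemon behaves like the background case — the start command returns while the real work continues in a child process — and that distinction decides the `Type=` setting later.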
Here is the result when we kill the process with `kill -9`: systemd starts it again according to the configuration (restart on failure after 60 seconds).
Screen cap explanation:
1. `ps -ef | grep kap` to show the process is running
2. `kill -9 <pid>` to kill it
3. show the systemd status
Eventually we see that systemd has detected the process failure.
Then, checking the service again with "systemctl status kap", the screen cap shows the process has started automatically, and `ps -ef` confirms it is running again.
Let's walk through the whole procedure of configuring a systemd service. To build a new service, we need two things ready: #1 the running script and #2 the service file. For #1, it can be any executable command, and it does not even need to be long-running. We are not covering everything here, just what we normally use; to learn more about oneshot services, please check out https://gist.github.com/drmalex07/d006f12914b21198ee43. First, we need to know the behavior of the command we are going to handle:
The screen cap here shows that the process returns after execution, so this is a background (forking) process to be handled. If it were a foreground process, it would not return and would hold the command prompt.
After learning the behavior of the execution command, we move to #2, the service file to be configured: the KAP service file.
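The service file itself appears only as a screenshot in the original post. A minimal sketch of what a `kap.service` unit might look like — the install path `/usr/local/kap/kap*` follows the description below, but the `User=`, the `spark` subdirectory, and the `kylin.sh` script name are assumptions for illustration:

```ini
# /etc/systemd/system/kap.service -- sketch; exact paths/user are assumptions
[Unit]
Description=Kyligence Analytics Platform (KAP)

[Service]
Type=forking
User=kylin
Restart=on-failure
RestartSec=60
# Resolve the versioned install dir at start time, then launch Kylin
ExecStart=/bin/bash -c 'export KYLIN_HOME=$(ls -d /usr/local/kap/kap*); export SPARK_HOME=$KYLIN_HOME/spark; $KYLIN_HOME/bin/kylin.sh start'
ExecStop=/bin/bash -c 'export KYLIN_HOME=$(ls -d /usr/local/kap/kap*); export SPARK_HOME=$KYLIN_HOME/spark; $KYLIN_HOME/bin/kylin.sh stop'

[Install]
WantedBy=multi-user.target
```

After saving a unit file under /etc/systemd/system/, run `systemctl daemon-reload` followed by `systemctl enable kap` and `systemctl start kap` to register and start it.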
Here is a detailed explanation of each part:
Services involved in early boot or late system shutdown should disable "DefaultDependencies" in the [Unit] section. Otherwise, a "Description" alone is good enough.
Type: as mentioned before, our process returns after starting, so we use "forking" as the "Type". If it were a long-running foreground process, we would use "simple".
Restart: "on-failure" covers all the cases that suit our requirement to trigger a restart. Here is the list of parameters that can be used:
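The parameter list appears only as a screenshot in the original post; per the systemd.service documentation, `Restart=` accepts the following values, summarized here as a config fragment:

```ini
# Possible Restart= values (from systemd.service):
#   no          - never restart automatically (the default)
#   on-success  - restart only after a clean exit
#   on-failure  - restart on unclean exit code, signal, timeout, or watchdog
#   on-abnormal - restart on signal, timeout, or watchdog (not unclean exit)
#   on-watchdog - restart only when the watchdog timeout expires
#   on-abort    - restart only on an uncaught signal
#   always      - always restart, regardless of exit status
Restart=on-failure
```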
RestartSec: the time to wait before restarting. This value cannot be too short, otherwise there is not enough time for the command to change the state of the service.
User: specifies which user executes the command.
ExecStart, ExecStartPre, ExecStartPost, ExecReload, ExecStop, ExecStopPost:
– ExecStart is the command run when the service is started
– ExecStartPre is the command run BEFORE the ExecStart command
– ExecStartPost is the command run AFTER the ExecStart command
– ExecReload is the command triggered by "systemctl reload"; it is used when the process should not be stopped and restarted just because its configuration was updated
– ExecStop is the command run to stop the process
The execution command is a bit more complicated than a normal one, because in order to start Kylin we need to export SPARK_HOME and KYLIN_HOME. For constant values we could use "Environment" in the [Service] section, but it does not support values computed at runtime. So the whole ExecStart is divided into:
a. Exporting KYLIN_HOME by listing the directory name under /usr/local/kap/kap*, which is the KAP installation path with the version number in the directory name
b. Deriving SPARK_HOME from KYLIN_HOME
c. Running the Kylin start script via the KYLIN_HOME environment variable
The same approach applies to ExecStop.
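Steps (a) and (b) can be reproduced in a plain shell. Here is a sketch using a temporary directory in place of /usr/local/kap — the `kap-2.4.0` version suffix and the `spark` subdirectory layout are made up for the demo:

```shell
# Simulate the versioned KAP install directory
mkdir -p /tmp/kap-demo/kap-2.4.0

# Step (a): resolve KYLIN_HOME the same way ExecStart does,
# by globbing the versioned directory name at runtime
KYLIN_HOME=$(ls -d /tmp/kap-demo/kap*)

# Step (b): derive SPARK_HOME from KYLIN_HOME (assumed layout)
SPARK_HOME=$KYLIN_HOME/spark

echo "KYLIN_HOME=$KYLIN_HOME"   # → KYLIN_HOME=/tmp/kap-demo/kap-2.4.0
echo "SPARK_HOME=$SPARK_HOME"   # → SPARK_HOME=/tmp/kap-demo/kap-2.4.0/spark
```

Because the glob is evaluated each time the service starts, upgrading KAP to a new versioned directory does not require editing the unit file.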
WantedBy: defines the run level of the service. Setting WantedBy=multi-user.target means run levels 2, 3, and 4, corresponding to /etc/init.d in the old days.
About run levels: https://www.tldp.org/LDP/sag/html/run-levels-intro.html
To know more about systemd targets: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/System_Administrators_Guide/sect-Managing_Services_with_systemd-Targets.html
Ⓒ 2023 Kyligence, Inc. All rights reserved.