forked from alien4cloud/alien4cloud.github.io
search.json · 1 line (752 KB)
{"entries":[{"title":"1.4.0 About","baseurl":"","url":"/documentation/1.4.0/about.html","date":null,"categories":[],"body":"Alien4Cloud stands for Application LIfecycle ENablement for Cloud. It is a project started by FastConnect to help enterprises adopt the cloud for their new or even existing applications in an open way, meaning with an open-source model and standardization support in mind. Why Cloud computing is becoming the prime development and deployment model for a number of applications. New applications want to benefit from the agility, and sometimes the cost reduction, implied by the usage of cloud technologies. Existing applications want to benefit from this model as well, allowing the development and operations teams managing them to accelerate the pace of new features and maintenance. This requires implementing agility principles and leveraging proper tools, not only in development but also in the deployment phase, along the whole application lifecycle. Agility is also reached when proper collaboration between Dev and Ops teams, and their business sponsor, is achieved. Even if a large number of solutions exist in the cloud ecosystem, the ecosystem is not consolidated: architectures, APIs, technologies and infrastructures are still evolving a lot. This leaves a lot of choice to anyone willing to develop and deploy applications in the cloud, but very often the will to reach agility creates a lock-in to the chosen provider at some level: SaaS, PaaS or IaaS. Given the investment term of applications from development to deployment (usually several years), and the legacy, it is important to protect the investment in the application lifecycle from moving parts, at any level possible. 
What Alien4Cloud aims to address some of these problems by providing the following capabilities: Ease the design and portability of applications by leveraging TOSCA (an emerging standard driven by the OASIS foundation) Isolate the application evolution from deployment technologies and infrastructures, allowing integration with any deployment layer and infrastructure Accelerate Application Infrastructure Design and improve reusability by providing a Components and Blueprints catalog Ease collaboration between development and deployment teams across the application lifecycle in creating the Components and Blueprints that fill the catalog Integrate with existing enterprise systems (Dev and Ops) through a REST API and pluggable strategies Check the current roadmap for details on where we are and where we are going. Standard support Alien4Cloud supports OASIS TOSCA, an emerging standard addressing application portability in the cloud. We believe that cloud enablement of applications should be done in an open way, free of any lock-in. No lock-in means that the application should move freely from one environment to another with the smallest possible effort. Therefore, it needs to abstract itself from the underlying infrastructure’s technical adherence, and define its infrastructure requirements and architecture independently from each cloud provider’s Infrastructure Catalog. If this is not done, even if technical compatibility between vendors could exist in theory (yet to be confirmed by reality), Infrastructure Catalog alignment between providers is very unlikely to happen, as each provider is focused on delivering the best value to its customers and does not spend time aligning with others, especially when they may be competitors. As an analogy, can you easily compare your telecom providers’ offerings? We bet that the same will happen with cloud providers, and it has already started. 
TOSCA enables the expression of application requirements on the infrastructure and its QOS/SLA in an open way, opening the door to optimized placement of applications in cloud infrastructures, with customer choice at its heart. We know about Infrastructure as Code; with TOSCA, we enter the era of “Application Requirements as Code”, easing application lifecycle management across several cloud infrastructures. By increasing service and application portability in a vendor-neutral ecosystem, TOSCA will enable: Portable deployment to any compliant cloud Smoother migration of existing applications to the cloud Flexible bursting (consumer choice) Dynamic, multi-cloud provider applications Note: the TOSCA Simple Profile is a working draft and is not yet released to the public. The current Alien 4 Cloud version uses an Alien 4 Cloud-specific DSL that is really close to the latest TOSCA Simple Profile in YAML TC work. Open-Source We decided to build Alien4Cloud and give it to the community in order to allow Application Requirements modelling in a TOSCA format, in a collaborative way, between all participants involved in application infrastructure requirements definition. It is provided under an Apache 2 license in order to favour contributions from external teams or individuals. Please check our Contribute page to see how you can help. What it is not Alien4Cloud focuses on Design, Collaboration, Application Lifecycle Management and, later, Governance, but leverages other existing open source projects that help orchestrate cloud applications and focus on runtime aspects, such as Cloudify. Alien4Cloud does not aim to provide an applications deployment runtime. We believe that there are already a number of viable options there (some of them not being TOSCA compliant, by the way) and we want to integrate rather than replace. We do it in an open way through a plug-in approach to allow you to leverage your best tools or skills. Status 1.4.2 is our latest version. 
If you wish to start a POC you can consider the 2.0.0 sprint milestones, which will let you leverage the latest developed features. Which version to choose? Basically the question depends on your timeframe, on the features you are looking for and on the support level you need. 1.4.2 is our most stable version and is the latest version that we support. 2.0.0 is still in development and things can change if you start using it. On the other hand, all new features are developed in 2.0.0, so you may get more by choosing to start working with this version. We especially recommend it for new POCs or projects that will really start after the 2.0.0 release (check our roadmap ). Supported platforms To get more information about the supported platforms, please refer to this section . Features "},{"title":"Administration","baseurl":"","url":"/documentation/1.4.0/user_guide/admin.html","date":null,"categories":[],"body":"The Administration section is available to any ADMIN user of the platform. It allows configuration of global elements of the platform, including: Users, Plugins, Orchestrators and locations (deployment targets), Server state (which allows viewing the server state and metrics as well as turning on maintenance mode), Audit, Meta-properties. "},{"title":"Server state","baseurl":"","url":"/documentation/1.4.0/user_guide/admin_server_state.html","date":null,"categories":[],"body":"The server state page allows an admin to get metrics on the current state of the alien4cloud server. In addition to metrics visualizations (such as garbage collection statistics and API response metrics), this page is also where an admin can enable alien4cloud maintenance mode. Maintenance mode blocks the REST API of alien4cloud to prevent any user operation from being triggered. Note that some internal processes within the server may still be active, like event fetching from orchestrators etc. 
Switching on maintenance mode displays a maintenance state page to your users, with the progress and messages that you may want to dispatch to them about the current state of the server maintenance. "},{"title":"Advanced configuration","baseurl":"","url":"/documentation/1.4.0/admin_guide/advanced_configuration.html","date":null,"categories":[],"body":" Using SSL see the security section . Elastic Search configuration ALIEN 4 Cloud uses ElasticSearch as its data store and indexing service. By default, ALIEN 4 Cloud starts up an embedded ElasticSearch node. Of course, when running in production it is recommended to use a remote cluster (ideally with high availability configured). Common configuration Common configuration allows you to configure the name of the elasticsearch cluster ( clusterName ), as well as the prefix_max_expansions (a performance setting used for prefix queries). We recommend that you don’t change the default prefix_max_expansions value. If you wish to change one of the parameters, you should open the alien4cloud-config.yml file and go to the elasticSearch configuration section. elasticSearch : clusterName : escluster local : false client : false resetData : false prefix_max_expansions : 10 local and resetData should be left to false. Configure the embedded Elastic Search The embedded Elastic Search configuration elasticsearch.yml is a native elastic search configuration and you can find plenty of information on the elastic search website on how to configure it. 
However, the main element you may wish to configure is the elastic search storage directories: path : data : ${user.home}/.alien/elasticsearch/data work : ${user.home}/.alien/elasticsearch/work logs : ${user.home}/.alien/elasticsearch/logs Configure a remote Elastic Search (through a no-data node) In order to configure a remote Elastic Search, you should edit the following: In the alien4cloud-config.yml file, edit the elasticSearch section and change client from false to true: elasticSearch : clusterName : escluster local : false client : true resetData : false prefix_max_expansions : 10 In elasticsearch.yml, make sure that the connection parameters match the ones of your elasticsearch cluster. Example: discovery.zen.ping.multicast.enabled : false discovery.zen.ping.unicast.enabled : true discovery.zen.ping.unicast.hosts : 129.185.67.112 In this mode, a ‘client’ node is initialized and joins the cluster. It doesn’t store any data and acts as a proxy. The machines must be visible to each other (in other words, they should be on the same network). Configure a remote Elastic Search (using a standalone transport client) In this mode, we use a simple standalone client that can be in another network as long as the cluster is reachable. In the alien4cloud-config.yml file, edit the elasticSearch section and set ‘client’ and ‘transportClient’ to true, and indicate the cluster host and port: elasticSearch : clusterName : escluster local : false client : true transportClient : true # a comma separated list of host:port couples hosts : 129.185.67.112:9300 resetData : false prefix_max_expansions : 10 In elasticsearch.yml, make sure that the cluster name is well defined (it should be the same as the cluster’s). cluster.name : escluster Configure a remote Elastic Search with replication In this mode, the Elastic Search cluster has more than one node (cluster with replication). 
Assuming we have a cluster of two nodes: In the alien4cloud-config.yml file, edit the elasticSearch section and add all hosts in your cluster. # a comma separated list of host:port couples hosts : <host_1_ip>:<port_1>,<host_2_ip>:<port_2> In elasticsearch.yml, make sure to set the proper number of replicas and the hosts in the cluster. Assuming we are on host_1, the configuration is: # Set the number of shards: index.number_of_shards : 1 # Set the number of replicas: index.number_of_replicas : 1 # 2. Configure an initial list of master nodes in the cluster # to perform discovery when new nodes (master or data) are started: discovery.zen.ping.unicast.hosts : [ \"localhost\" , <host_2_ip> ] Directories configuration ALIEN 4 Cloud stores various files on the hard drive: Cloud Service archives, artifacts overridden in the topologies, plugin archives etc. Directories can be configured in the alien4cloud-config.yml file. By default, ALIEN 4 Cloud stores data in the user home directory, in a .alien folder. # Configuration of Alien 4 Cloud's CSAR repository, temporary folder and upload settings. directories : # Alien 4 cloud main directory (other directories are relative path to this one) alien : ${user.home}/.alien # directory in which alien 4 cloud stores Cloud Service Archives csar_repository : csar # directory in which alien 4 cloud stores uploaded artifacts (war etc.). artifact_repository : artifacts # temporary directory for alien 4 cloud upload_temp : upload # directory in which alien 4 cloud unzips loaded plugins. plugins : plugins Admin user initialization In case there is no admin user in its repository, ALIEN 4 Cloud can automatically create a user with ADMIN rights. The user name and password are configured in the alien4cloud-config.yml file. Of course, if an ADMIN user already exists in ALIEN then no user is created and this section is ignored. 
# Configuration of default admin ensurer, if true it creates a default admin user if no admin can be found in the system. users : admin : # Alien 4 cloud checks that an admin user is defined at the application launch. ensure : true username : admin password : admin email : admin@mycompany.com LDAP configuration See the specific sub-section . Component search boost ALIEN 4 Cloud manages a custom way to rank components when searching for them. In order to compute the boost for a component, we get the number of topologies that use the component and multiply it by the usage factor. Then, if the component is the latest version, we add a fixed version boost; finally, if the component is marked as default for at least one of its capabilities, we add another fixed default boost. In order to change the default weights you can edit the following configuration: # configure the boost factors for tosca elements in the search, elements with the highest boost factor appear first in search results # the total boost factor for a component is the sum of the following boost factors. components.search.boost : # boost components that are used in topologies by (number of active topologies that use the component * usage) usage : 1 # components that exist in latest version get a boost factor over other components. Note that this factor should be very high as every component # with latest version will be boosted. version : 1000 # components that are configured as default for at least 1 capability get the following boost factor. default : 10 JVM tuning You might want to tune your JVMs for better performance in production. Here are some tested JVM options that we recommend. Please make sure to customize the different paths in the examples below according to your installation. 
ElasticSearch JVM -Xms2g -Xmx2g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -XX:+PrintGCDateStamps -XX:ThreadStackSize=256k -XX:+ScavengeBeforeFullGC -XX:+CMSScavengeBeforeRemark Alien4Cloud JVM -server -showversion -XX:+AggressiveOpts -Xmx2g -Xms2g -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+ScavengeBeforeFullGC -XX:+CMSScavengeBeforeRemark -XX:+DisableExplicitGC "},{"title":"Ansible support","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/ansible_support.html","date":null,"categories":[],"body":" Premium feature This section refers to a premium feature. Ansible support for the cloudify 4 orchestrator needs Ansible to be installed on the manager. You need to install version 2.0.1.0 of Ansible: sudo yum install python-cffi sudo yum install gcc sudo yum install python-devel sudo yum install openssl-devel sudo pip install ansible==2.0.1.0 sudo pip install --upgrade setuptools Change this config in the Ansible configuration file: [defaults] callback_whitelist = tree Or simply add a cfg file: sudo bash -c \"mkdir /etc/ansible && echo '[defaults]' > /etc/ansible/ansible.cfg && echo 'callback_whitelist = tree' >> /etc/ansible/ansible.cfg\" For the moment we need to hack the tree plugin. 
Here is the patch: @@ -19,6 +19,7 @@ __metaclass__ = type import os +import json from ansible.plugins.callback import CallbackBase from ansible.utils.path import makedirs_safe @@ -39,26 +40,28 @@ def __init__(self): super(CallbackModule, self).__init__() - self.tree = TREE_DIR + self.tree = os.environ['TREE_DIR'] if not self.tree: self.tree = os.path.expanduser(\"~/.ansible/tree\") self._display.warning(\"The tree callback is defaulting to ~/.ansible/tree, as an invalid directory was provided: %s\" % self.tree) - def write_tree_file(self, hostname, buf): + def write_tree_file(self, hostname, name, buf): ''' write something into treedir/hostname ''' - buf = to_bytes(buf) + buf = {'task_name': \"{}\".format(name), 'result': json.loads(buf)} + #buf = to_bytes(buf) + buf = json.dumps(buf) try: makedirs_safe(self.tree) path = os.path.join(self.tree, hostname) - with open(path, 'wb+') as fd: - fd.write(buf) + with open(path, 'ab+') as fd: + fd.write(\"{}\\n\".format(buf)) except (OSError, IOError) as e: self._display.warning(\"Unable to write to %s's file: %s\" % (hostname, str(e))) def result_to_tree(self, result): if self.tree: - self.write_tree_file(result._host.get_name(), self._dump_results(result._result)) + self.write_tree_file(result._host.get_name(), result._task, self._dump_results(result._result)) def v2_runner_on_ok(self, result): self.result_to_tree(result) Paste this into a patch file, eg. tree.py.patch and patch the original file: sudo yum install patch cp /usr/lib/python2.7/site-packages/ansible/plugins/callback/tree.py tree.py.back sudo patch /usr/lib/python2.7/site-packages/ansible/plugins/callback/tree.py tree.py.patch You will need to install specific packages if you want to use some extra ansible modules. 
For example, in order to use ec2 module, you will need to install boto : sudo pip install boto "},{"title":"Ansible support","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/ansible_support.html","date":null,"categories":[],"body":" Premium feature This section refers to a premium feature. Ansible support for cloudify 3 orchestrator needs Ansible to be installed on the manager. You need to install version 2.0.1.0 of ansible: sudo yum install python-cffi sudo yum install gcc sudo yum install python-devel sudo yum install openssl-devel sudo pip install ansible==2.0.1.0 sudo pip install --upgrade setuptools Change this config in ansible configuration file: [defaults] callback_whitelist = tree Or simply add a cfg file: sudo bash -c \"mkdir /etc/ansible && echo '[defaults]' > /etc/ansible/ansible.cfg && echo 'callback_whitelist = tree' >> /etc/ansible/ansible.cfg\" For the moment we need to hack the tree plugin. Here is the patch: @@ -19,6 +19,7 @@ __metaclass__ = type import os +import json from ansible.plugins.callback import CallbackBase from ansible.utils.path import makedirs_safe @@ -39,26 +40,28 @@ def __init__(self): super(CallbackModule, self).__init__() - self.tree = TREE_DIR + self.tree = os.environ['TREE_DIR'] if not self.tree: self.tree = os.path.expanduser(\"~/.ansible/tree\") self._display.warning(\"The tree callback is defaulting to ~/.ansible/tree, as an invalid directory was provided: %s\" % self.tree) - def write_tree_file(self, hostname, buf): + def write_tree_file(self, hostname, name, buf): ''' write something into treedir/hostname ''' - buf = to_bytes(buf) + buf = {'task_name': \"{}\".format(name), 'result': json.loads(buf)} + #buf = to_bytes(buf) + buf = json.dumps(buf) try: makedirs_safe(self.tree) path = os.path.join(self.tree, hostname) - with open(path, 'wb+') as fd: - fd.write(buf) + with open(path, 'ab+') as fd: + fd.write(\"{}\\n\".format(buf)) except (OSError, IOError) as e: self._display.warning(\"Unable to write to 
%s's file: %s\" % (hostname, str(e))) def result_to_tree(self, result): if self.tree: - self.write_tree_file(result._host.get_name(), self._dump_results(result._result)) + self.write_tree_file(result._host.get_name(), result._task, self._dump_results(result._result)) def v2_runner_on_ok(self, result): self.result_to_tree(result) Paste this into a patch file, e.g. tree.py.patch, and patch the original file: sudo yum install patch cp /usr/lib/python2.7/site-packages/ansible/plugins/callback/tree.py tree.py.back sudo patch /usr/lib/python2.7/site-packages/ansible/plugins/callback/tree.py tree.py.patch You will need to install specific packages if you want to use some extra ansible modules. For example, in order to use the ec2 module, you will need to install boto: sudo pip install boto "},{"title":"Create a new application","baseurl":"","url":"/documentation/1.4.0/user_guide/application_creation.html","date":null,"categories":[],"body":"Creation of a new application requires the APPLICATIONS_MANAGER or ADMIN global role. Users with the right roles should see the New Application button. Clicking on the New Application button opens a modal that prompts the user for some fields: Name : This is the name of the application as displayed in alien 4 cloud. It is required and should be meaningful for users. The name of an application must also be unique in alien 4 cloud. The name of an application can also be changed later when editing the application. Archive name (Id) : This is the unique identifier of the application. In alien 4 cloud, an application will have TOSCA topologies to describe what to deploy and how to deploy it. Every TOSCA topology has a matching TOSCA archive with a unique archive name and archive version. The id of an application in alien 4 cloud is also the name of the TOSCA archive. Note that this name must be unique. Description : Description is optional and will be displayed to users in the application list. 
Initialize topology from : When creating a new application, alien 4 cloud will create a default Environment and a default Version. The default version will have an associated default Topology version. It is possible to create a new blank topology (scratch - screenshot above) or to look for an available topology template in the catalog (see screenshot below). Template and workspace limitation It is not yet possible to create an application from a template that is not in the public global workspace. The reason is that, once the application is created, its topology should have visibility of all components used in the template, so basically of any dependent archive (which may also be restricted to a private workspace). We don’t yet support requesting promotion of the dependencies of a template that uses private archives, and decided to disable the ability to create applications from private templates. This behavior will be improved in future versions. "},{"title":"Deploy an environment","baseurl":"","url":"/documentation/1.4.0/user_guide/application_deployment.html","date":null,"categories":[],"body":" In alien4cloud you actually deploy an environment of an application. In order to prepare and trigger your deployment, first go to the deployment page. Before deploying your environment you have to configure the deployment, and alien4cloud will drive the user through comprehensive sequential steps in order to achieve it. Each step performs a validation of the deployment topology, and error details are displayed on the right of the screen. Note that you cannot go to the next step as long as the current one is still not valid. Inputs Inputs are an efficient way to configure environment-specific properties that may be shared within a single topology, or to let the user(s) responsible for deployment configure some of the deployment properties without having to deal with the complexity of the topology editor and all of its components. 
There are two types of inputs: Properties : For example, the designer may choose to let the deployer configure the number of CPUs, the JAVA VM heap etc. Artifacts : For example a license file, an initial data file, a configuration file for a software etc. Inputs may be optional or required; if any required input is not defined, alien4cloud will display a todo list and prevent the user from going to the next configuration step. Once all required inputs are defined, the location selection step is unlocked. Location selection Location selection allows the deployment user to select where they want to actually deploy the application. Alien 4 cloud will display to the user a list of locations that the user is authorized to deploy on. The alien4cloud admin is responsible for the configuration of the locations and for granting access to them. Note that the access may be configured per user or per application/application environment, meaning that, as a user, you may see some locations available for some of your environments and not for some others. If you feel that a location you need to deploy your application on is missing, you should ask your alien4cloud admin for permissions. You can select, among the displayed locations, the one on which you would like to deploy. The proposed locations are determined by matching every existing location against the topology, done by a matcher plugin. For now, note that if no matching plugin is configured by the administrator, a default matcher is used, checking the following: Supported artifacts : The orchestrator managing a location can support all the artifacts contained in your topology (nodes and relationships implementation scripts) Authorizations : The current user / application / environment has sufficient rights to deploy on the location Node substitution The next step is to substitute some abstract nodes from your topology with resources provided by the selected location. In the meantime, you can edit some properties if you need to. 
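The default matcher's two checks (supported artifacts and authorizations) boil down to simple set membership tests. A minimal sketch, assuming plain dictionaries for locations; the field names and sample values below are illustrative, not alien4cloud's actual data model:

```python
# Illustrative sketch of the default location matcher: a location is
# proposed only when its orchestrator supports every artifact type used
# in the topology AND the current user is authorized on that location.
def match_locations(topology_artifacts, locations, user):
    matched = []
    for loc in locations:
        supports_all = set(topology_artifacts) <= set(loc['supported_artifacts'])
        authorized = user in loc['authorized_users']
        if supports_all and authorized:
            matched.append(loc['name'])
    return matched

locations = [
    {'name': 'openstack-dev',
     'supported_artifacts': {'tosca.artifacts.Implementation.Bash'},
     'authorized_users': {'alice'}},
    {'name': 'aws-prod',
     'supported_artifacts': {'tosca.artifacts.Implementation.Bash',
                             'tosca.artifacts.Implementation.Python'},
     'authorized_users': {'bob'}},
]

print(match_locations({'tosca.artifacts.Implementation.Bash'}, locations, 'alice'))
# prints ['openstack-dev']
```

Both checks must pass: in the sample data, 'aws-prod' supports the required artifact type but 'alice' is not authorized on it, so it is filtered out.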
Deploy This is the last step. If the orchestrator defines some deployment properties, here is the place to fill them in. You can also decide (if possible) whether you want to expose your deployment as a service. ( More about services here… ) A final validation is made, taking into account everything that has been configured up until now, and any errors are displayed. If your topology is valid and ready for deployment, you can hit the deploy button to proceed. You can now follow the deployment progress on the runtime view . Update Once an application has been successfully deployed, you can upgrade it by hitting the button. Upgrading a deployment means adding/removing/changing nodes and/or relationships in a deployed topology . This can be done: On the same location : Only if the currently selected location is the same on which the deployment has been made In an incremental development mode : your application has been deployed, you add / remove some nodes in your topology, then you can update the deployment in order to deploy your changes. Between versions : you have already deployed a V1 of your application in production. You have worked on a V2 and have successfully tested it. You want to push the delta to the production environment; you can use the upgrade feature to deploy the V2 in your production environment (instead of undeploying V1 then deploying V2). Since this feature strongly depends on the underlying orchestrator, you should refer to the dedicated documentation of the orchestrator you are using to know more about this feature ( for Cloudify orchestrator ). Since 1.4.1, in addition to the update process, alien4cloud will, right after the update, automatically trigger the post_update workflow in case one is defined in the original topology and in the updated topology . 
Note that while we decided to add this option in the 1.4.1 version, we also decided to apply to this option the same limitation that exists in Cloudify: since no custom workflow can be updated with this orchestrator, you will not be able to update or add a post_update workflow if it is not defined in the initial topology. However, we will trigger the workflow only if it is still defined in the updated topology, meaning that while you cannot change or add this behaviour, you may decide to stop applying it. "},{"title":"Manage environments","baseurl":"","url":"/documentation/1.4.0/user_guide/application_environments.html","date":null,"categories":[],"body":"In the environment management page you can create, edit or delete an environment. The version and the cloud are the most important information. An environment cannot be deleted while its application is still deployed. "},{"title":"Access your applications","baseurl":"","url":"/documentation/1.4.0/user_guide/application_list.html","date":null,"categories":[],"body":"To reach the list of applications you have access to, you should click on the button in the main navigation bar. The screen shows you the list of applications you can access (basically any application you have a role assigned to). If you are a platform admin you can see all applications on the list. For each application you can toggle the display of its environments to see the state of the deployment as well as the deployed version. Just click on an application to reach the application detail page, where you will be able to see the application general information and reach the deployment outputs (application url etc.). "},{"title":"Application(s) management","baseurl":"","url":"/documentation/1.4.0/user_guide/application_management.html","date":null,"categories":[],"body":"An application in alien4cloud is something that you will be able to describe and deploy, making it accessible to other users for consumption. 
To actually realize the deployment, multiple people will contribute to the application with various roles, with respect to the perimeter they operate in. The application section is accessed through the applications button in the main navigation bar . To understand the application concept, please refer to the related concept section . APPLICATIONS_MANAGER and alien4cloud ADMIN users will be interested to start with the following section: Create a new application If you are an APPLICATION_MANAGER you may be interested in the following: Grant authorizations : allowing you to manage roles. Manage versions : explains how you can create new application versions. Manage environments : guides you through the creation and management of environments for your application, and especially how you set the next version for a given environment If you are an APPLICATION_DEVOPS you should look at the topology editor section to learn how to edit the topologies of the application versions. To configure and deploy environment(s) of the application, look at: Configure and deploy Managing runtime "},{"title":"Post deployment operations","baseurl":"","url":"/documentation/1.4.0/user_guide/application_post_deployment.html","date":null,"categories":[],"body":" Premium feature This section refers to a premium feature. Once an application is deployed, we might need to be able to upgrade some file (config, binary, licence file) related to a given component without modifying the topology itself (no change on relationships, no node added, no instances added), or maybe just to execute a custom operation to put or get some information from the deployed nodes. The Cloudify 3 premium plugin allows you to perform such actions. When the plugin is enabled, you can see the patches and operations sub menus on the left side bar of the runtime view. The two concepts are very similar in some ways. What it is? 
Functionally speaking, a patch / operation can be defined as a set of actions that will be executed on a node, for example upgrading the version or the configuration of a component, accessing some information about a component, etc. It is linked to a specific version of a node, and can be triggered for one or more instances of that node. Technically speaking, it can be a script file, or a zipped set of files, that one will upload once the topology is deployed, via the provided user interface. Supported formats are: bash scripts for Linux, PowerShell for Windows, zip and tar.gz. If you provide an archive, you must make sure to have only one script file at the root. If you need to provide additional scripts, just put them in subfolders and refer to them using relative paths (at execution stage, the current working folder will be the root of the archive). What it is not? This feature is not a hot upgrade assistant: it’s your responsibility to enhance your TOSCA components once you have provided a patch for a given component. A real life scenario could be: you write TOSCA components. you assemble them in a topology V1. you deploy your application. you provide a patch that changes a config, for example. in parallel, you upgrade your component in order to integrate the changes, and release a new version of this component. you upgrade your topology to make it use your new component version. Along the way you will probably release a new version V2 of your application topology. Then, if you redeploy the application V1, the patches will be executed, but if you deploy V2, they won’t be (since a patch is associated to a component version). Creation Requirements To use this feature, your orchestrator should be configured with a functional URL for the post-deployment application . Once you have your script or archive, you can upload it. In the modal, choose a name for your operation, optionally add a description, and select a node. 
Note that the list of nodes does not include nodes such as Compute , Network and Storage nodes (nodes provided by the IaaS). Execution The Alien4Cloud interface allows the user to trigger a patch/operation execution for one or all instances of a node. Each has specific behaviors. Known issue Currently, the result shown in the UI doesn’t represent the real result of the execution of the operation or patch command. Operation execution An operation can be triggered as many times as you want on a node. Patches execution As stated above, triggering a patch execution is possible via the provided user interface. However, and most importantly, once a patch is added for a node, it will be triggered automatically on: all the instances of the node in case of fail-over; all the newly created instances in case of scaling up. If the execution is successful, the patch is acknowledged and you can see it on the view. TODO : image patch acknowledged Once a patch is executed and acknowledged on an instance, it will not be executed again even if it is triggered. Node properties and attributes Some inputs are auto-generated when the operation is added while the blueprint is generated, so for a given node, in any patch or post deployment operation, you can access the node’s properties and attributes. Attribute names are prefixed by self_attribute_ and property names by self_property_ . Like any input in an operation implementation script, you can access them in your script using environment variables. Deletion You can remove a patch / operation if needed, via the provided user interface. However, note that if the patch / operation has already been executed on some instances, deleting it will not undo the changes on those instances . Also, when an application environment is deleted, all patches and operations related to a deployment with that environment are deleted. 
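Concretely, a minimal patch script can read the injected variables described above. This is a sketch only: the property and attribute names used below (port, ip_address) are hypothetical examples, not part of Alien4Cloud.

```shell
#!/bin/bash
# Sketch of a patch/operation script. Alien4Cloud injects node
# properties as self_property_<name> and attributes as
# self_attribute_<name> environment variables; "port" and
# "ip_address" below are hypothetical names for illustration.
patch_component() {
  echo "patching component on port ${self_property_port} (instance ip: ${self_attribute_ip_address})"
  # ...replace the config/binary/licence file here...
}
# e.g. invoked by the orchestrator with the variables already set:
#   self_property_port=8080 patch_component
```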
"},{"title":"Grant authorizations","baseurl":"","url":"/documentation/1.4.0/user_guide/application_roles.html","date":null,"categories":[],"body":"To grant authorizations, a platform ADMIN or a user with the APPLICATION_MANAGER role for the given application must go to the target application and click on the button of the left menu bar. Roles information The list of roles available for the application or its environments, and their explanation, is available at the bottom of this page. Some roles are specified for the whole application; for example the APPLICATION_MANAGER role and APPLICATION_DEVOPS (which allows edition of the topologies to deploy) are available for the whole application. Some roles however, like the deployment role, are specified per environment. In order to assign a role to a user, just click on the key button on the user row and assign the role of your choice. You can also assign a role to a user group in the same way by going to the Groups tab. For environment related roles, just click on the environment choice dropdown and select the environment of your choice. Application’s roles These roles define the actions allowed on a given application : Role Description APPLICATION_MANAGER The application manager can manage the application configuration, its versions and environments, as well as user management for the application. APPLICATION_DEVOPS The devops role should be given to application developers. In ALIEN, users with the devops role on an application can edit the topologies of every SNAPSHOT version. Environment’s roles In addition to the application roles, the application manager can specify roles related to every single environment defined for the application. These roles define the actions allowed on a given environment : Role Description APPLICATION_USER An application user on an environment is allowed to see the environment status as well as having access to the deployment output properties. 
DEPLOYMENT_MANAGER The deployment manager for an environment is responsible for the configuration and deployment/undeployment of an environment. In order to be able to deploy/undeploy the environment, the user must also have the DEPLOYER role on the location on which he wants to deploy. The DEPLOYER role is configured on the location configuration by any user having the global ADMIN role. "},{"title":"Application runtime","baseurl":"","url":"/documentation/1.4.0/user_guide/application_runtime.html","date":null,"categories":[],"body":" On this runtime submenu view Application > Runtime , you can see the detailed deployment progress. The previous picture was taken during a WordPress deployment; to deploy your own WordPress, please refer to this section . Logs Premium feature This section refers to a premium feature. You can access the logs view through a submenu of the runtime view. In this page you can see deployment logs in alien4cloud. (1) You can search for logs, and filter them by date. Some facets are also available to search specific logs: (2) You can dynamically tail the latest logs To add or remove log information in the table, click on the cogs button of its first line. A modal will appear and let you choose your columns. Scaling TODO how to scale Launching operations TODO how to trigger an operation execution "},{"title":"Manage versions","baseurl":"","url":"/documentation/1.4.0/user_guide/application_versions.html","date":null,"categories":[],"body":"Version numbers follow the Maven convention, i.e. < major >.< minor >.< incremental >-< qualifier >. Every version that contains the string -SNAPSHOT is recognized as a SNAPSHOT. This means that in alien4cloud, just like in Maven, a version such as 1.0.0-SNAPSHOT-ALPHA or 1.0.0-ALPHA-SNAPSHOTfoo is recognized as a SNAPSHOT and can be modified. We recommend, however, that you keep -SNAPSHOT at the end of the version string. When creating topology variants you will assign a qualifier to the variant. 
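The SNAPSHOT recognition rule above is a plain substring check; a minimal sketch (is_snapshot is an illustrative helper, not an Alien4Cloud command):

```shell
#!/bin/bash
# A version is treated as a SNAPSHOT whenever it contains the
# string "-SNAPSHOT" anywhere, not only as a suffix.
is_snapshot() {
  case "$1" in
    *-SNAPSHOT*) return 0 ;;  # e.g. 1.0.0-SNAPSHOT, 1.0.0-SNAPSHOT-ALPHA
    *)           return 1 ;;  # e.g. 1.0.0, 1.0.0-ALPHA
  esac
}
```

Both 1.0.0-SNAPSHOT-ALPHA and 1.0.0-ALPHA-SNAPSHOTfoo therefore count as snapshots, which is why keeping -SNAPSHOT at the end of the version string is recommended.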
Alien 4 Cloud will automatically add the variant qualifier as the first qualifier in the variant version string. So if your version is 1.0.0-SNAPSHOT and your variant qualifier is DEV, the version number will be 1.0.0-DEV-SNAPSHOT; if the version number was 1.0.0-SNAPSHOT-ALPHA, the variant version number will be 1.0.0-DEV-SNAPSHOT-ALPHA, etc. Configure versions While Alien 4 Cloud creates a default version, you will soon have to create new versions for your application. In Alien 4 Cloud a version can have multiple topology variants that we call Topology Versions. To manage Versions and Topology versions you must go to the application version management screen. To do so you must have the APPLICATION_MANAGER role for the application (not to be confused with the global APPLICATIONS_MANAGER role) or the global ADMIN role. From the application list screen, click on the application for which you want to manage versions and then click on the version button in the application’s left side-bar menu. This screen displays all the versions of the application (by default only a single 0.1.0-SNAPSHOT version is created) and, for each version, the list of its topology variants and their unique version numbers. Create new version You can create a new version by clicking the New version button . Once clicked, the new version modal will open so you can configure the new version. Version number : This is the number of the new version to create. It must be unique for this application and must follow the maven (and TOSCA) version pattern. Description : Optional description for this version. Initialize topology from : When creating a new version, Alien 4 Cloud allows you to initialize one or multiple topology versions for this application version. The default option is ( Previous version ). This option allows you to duplicate all the topology versions from a previous application version for the new application version. 
When choosing the template creation, only a single application topology version will be created. The associated topology will be based on the selected template. When choosing creation from scratch, only a single application topology version will be created. The associated topology will be empty. Update version The description field of a version can be updated anytime. However, this is not the case for the version number. First of all, a released version number CANNOT BE UPDATED . Therefore, make sure your version is a SNAPSHOT one before trying to update. The table below summarizes the cases when a version number update can be done: State Description Updatable Unused The version is not yet assigned to an environment YES Assigned The version is assigned to an environment and maybe configured for a future deployment. YES Deployed The version is assigned to a deployed environment NO Exposed as Service The version is assigned to an environment (deployed or not), which is exposed as a service. NO Delete version Deletion of a version will remove all topology versions and associated topologies. It can be achieved through the trash button on the same line as the version you want to delete. Create new topology version You can create a new topology variant for an application version by clicking the plus button on the same line as the version for which to create a topology version/variant. This opens the new topology version modal: Qualifier : Creation of a new topology version requires the configuration of a specific qualifier for this topology version/variant. The generated version number is displayed on the side of the qualifier field. Description : Optional description for this topology version/variant. Initialize topology from : When creating a new topology version, Alien 4 Cloud allows you to initialize its associated topology. 
The default option (Previous version) allows you to initialize the topology from the one of a single topology version (either from the same version or another version of the application). Of course you can also choose to create the version from a template or from scratch. Delete topology version Deletion of a topology version will also delete its associated topologies. It can be achieved through the trash button on the same line as the topology version you want to delete. "},{"title":"Applications","baseurl":"","url":"/documentation/1.4.0/concepts/applications.html","date":null,"categories":[],"body":"Alien 4 Cloud aims at managing application lifecycles and their related deployments. Applications in Alien 4 Cloud are visible only to users that have some roles within the application. The application in Alien 4 Cloud is the entity that people are going to deploy. Every application can have one or more versions and one or more environments. Version A version of an application answers the question of what we want to deploy. Every application version defines the actual service that a given version of an application is going to deliver. A version represents a given state for the application topology. As we explained already, a topology contains versioned information for all components required to deploy the application, meaning that a defined version of an application can be moved from a cloud to another with the guarantee that the same components will be deployed. That said, you may sometimes need the ability to define one or multiple topologies for a given version in order to suit some of your environment constraints: - For example, you may want to use in development a topology version that uses HSQL as the database implementation, while in production you will use a topology with an Oracle database (that requires licenses and so on). 
Of course every topology version for a given application version should provide the same service, differences between these topologies being mostly technical. In an ideal world you would have a single topology version that you deploy on every environment, changing only some deployment configurations like scaling parameters for example. Snapshot and release When you create an application, Alien 4 Cloud creates a default version 0.1.0-SNAPSHOT . The qualifier SNAPSHOT is really important and essentially means “in development”. Indeed, Alien 4 Cloud will prevent any modification of an application topology that is not a SNAPSHOT version. When you are ready to release a version, just rename it and remove the SNAPSHOT qualifier (for example rename 0.1.0-SNAPSHOT to 0.1.0 ). Alien will then consider the version as released and it will not be possible to update it. If you want to change the topology you will have to create a new version for your application (based on the previous version if you like). Environments An environment represents a deployment target for an application. Every environment may be owned and deployed by a different team. That way you can offer the ability for your development, UAT, and production teams to efficiently work together. Application environments are also a key feature to design your application lifecycle across the different environments and, eventually, clouds. For example you can design one or more development environments for your developers (on EC2 for example), and the pre-production and production environments on your own OpenStack(s). You can then move a version from an environment to another by switching the version on the environments and re-deploying it. Like for the application version, a default application environment named “Environment” is created when you create your application. This new environment is configured to target the default created version but without any associated cloud. 
You can specify the cloud in the environment management page or in the deployment page. You can also add a type to your environment and write a description. Every environment has an associated topology version that defines the next version that will be deployed to this environment; the same topology version may be associated to one or multiple environments, or to none of them. Application lifecycle management, specific configurations and deployment. In summary, the combination of the version and environment concepts offers the ability to manage the lifecycle of your application. The combination of an environment and a version has a specific deployment configuration. This configuration consists of multiple elements: - Location matching configuration : This is the selection of the deployment target for this environment/version (like Amazon EC2, my internal cloud, my set of VMs, etc.) - Node matching configuration : When a topology contains abstract nodes, they can be replaced before deployment by a concrete implementation; this is really the key element for topology portability across clouds. For example, if I selected Amazon as my deployment location in the first step, I will be able to select all matching Amazon Image and Flavor associations to replace my Compute node. On an existing machine cluster I will match the Compute node against some available machines in the pool. On a container based deployment target I will target a container image to deploy my Compute node, etc. Node matching can replace some abstract nodes with either ‘on-demand resources’ or ‘services’, which are already running elements that are available for me to consume. - Inputs configuration : A topology may define some input properties and input artifacts that you can configure for this environment/version deployment association. Finally, once all these elements are configured, you can perform a deployment. 
"},{"title":"Artifact definition","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/artifact_definition.html","date":null,"categories":[],"body":"An artifact definition defines a named, typed file that can be associated with a Node Type or Node Template and used by the orchestration engine to facilitate deployment and implementation of interface operations. Keynames Keyname Required Type Description tosca_definitions_version type no string The optional data type for the artifact definition. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 description no string The optional description for the artifact definition. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 mime_type (1) no string The optional Mime type for finding the correct artifact definition when it is not clear from the file extension. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 repository no string The optional name of the repository definition which contains the location of the external repository that contains the artifact. alien_dsl_1_3_0 tosca_simple_yaml_1_0 deploy_path (2) no string The file path the associated file would be deployed into within the target node’s container. Deploy path is valid only for a deployment artifact and not in the context of an implementation artifact. N.A. (1) Note that while the alien4cloud parser handles mime types correctly, they are not taken into account in any processing yet. (2) The current implementation of Alien 4 Cloud does not take the deploy_path into account but exposes an environment variable that contains the local path in which alien4cloud has placed the file. Getting the artifact from your scripts : Alien 4 Cloud does not support the get_artifact function to retrieve a specified artifact. It will however provide an input environment variable, named after the artifact name, in all scripts of the node/relationship that defines the artifact. The value of the environment variable is equal to the local path of the file. 
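For instance, a script attached to a node that defines an artifact named scripts_directory can locate the deployed files through that variable. This is a sketch assuming the variable is set by the orchestrator as described above; list_artifact_scripts is an illustrative helper.

```shell
#!/bin/bash
# Sketch: Alien4Cloud exposes an environment variable named after the
# artifact (here, scripts_directory) holding the local path where the
# file or directory was placed.
list_artifact_scripts() {
  ls "${scripts_directory}"
}
```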
Grammar # Simple form - type and mime inferred from file URI <artifact_name> : <artifact_file_URI> # Qualified form - type and mime explicit <artifact_name> : <artifact_file_URI> type : <artifact_type_name> description : <artifact_description> mime_type : <artifact_mime_type_name> Example The following example shows how to define a node type with artifacts: node_types : fastconnect.nodes.OperationSample : artifacts : - scripts_directory : scripts type : tosca.artifacts.File description : Directory that contains all scripts. "},{"title":"Artifact type","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/artifact_type.html","date":null,"categories":[],"body":"An Artifact Type is a reusable entity that defines the type of one or more files on which Node Types or Node Templates can have dependent relationships, and which are used during operations such as installation or deployment. Keynames Keyname Required Type Description tosca_definitions_version derived_from no string An optional parent Artifact Type name the Artifact Type derives from. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 version (1) no version An optional version for the Entity Type definition. N.A. metadata (2) no map of string Defines a section used to declare additional metadata information. alien_dsl_1_3_0 tosca_simple_yaml_1_0 tags (2) no map of string Defines a section used to declare additional metadata information. alien_dsl_1_3_0 alien_dsl_1_2_0 description no string An optional description for the Artifact Type. mime_type no string The required mime type property for the Artifact Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 file_ext no string[] The required file extension property for the Artifact Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 properties no map of property definitions An optional list of property definitions for the Artifact Type. 
alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 (1) version at type level is defined in TOSCA but is optional and there is no example of how it should be managed. We believe in alien4cloud that versions should be managed at the service template/archive level and dispatched to every element defined in the service template/archive. (2) metadata appeared in TOSCA while alien4cloud already had tags support; support for the metadata keyword has been added in the 1.3.1 version. Note that if you specify both metadata and tags, one may silently override the other (this should be avoided). Grammar <artifact_type_name> : derived_from : <parent_artifact_type_name> metadata : <map of string> description : <artifact_description> mime_type : <mime_type_string> file_ext : [ <file_extension_1> , ... , <file_extension_n> ] properties : <property_definitions> See: property_definitions Example The following example shows how to define an artifact type: my_artifact_type : description : Java Archive artifact type derived_from : tosca.artifacts.Root mime_type : application/java-archive file_ext : [ jar ] "},{"title":"Attribute definition","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/attribute_definition.html","date":null,"categories":[],"body":"An attribute definition defines a named, typed value that can be associated with an entity defined in this specification (e.g., a Node Type or Relationship Type). Specifically, it is used to expose a value that is set by the orchestrator after the entity has been instantiated (as part of an instance model). Typically, this value can be retrieved via a function from the instance model and used as input to other entities or implementation artifacts. Keynames Keyname Required Type Description tosca_definitions_version type (1) yes string The required data type for the attribute. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 description no string The optional description for the attribute. 
alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 default no N.A. An optional key that may provide a value to be used as a default if not provided by another means. This value SHALL be type compatible with the type declared by the attribute definition’s type keyname. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 status (2) no (default supported) string The optional status of the attribute relative to the specification or implementation. See supported status values defined under the property definition section. N.A. entry_schema (1) no string The optional key that is used to declare the name of the Datatype definition for entries of set types such as the TOSCA list or map. N.A. (1) Alien 4 Cloud currently supports primitive types only on an attribute. Entry schema is therefore not supported, as it is used only to specify the type of entries for the complex list and map types. (2) Status has been added in the latest versions of the specification and is not yet supported in alien4cloud. The table below details the supported values as defined in the TOSCA specification. Grammar <attribute_name> : type : <attribute_type> description : <attribute_description> default : <attribute_default_value> Example The following example shows how to define a node type with attributes: node_types : fastconnect.nodes.AttributeSample : attributes : attribute_1 : type : string attribute_2 : type : string description : this is the second attribute of the node default : This is the default value of the attribute attribute_3 : type : integer default : 45 "},{"title":"Backup & restore","baseurl":"","url":"/documentation/1.4.0/admin_guide/backup_restore.html","date":null,"categories":[],"body":"Scope of the tool The purpose of this tool is to snapshot Alien4Cloud data and restore a previous snapshot. 
The backup and restore tool is responsible for backing up Alien4Cloud data: Alien4Cloud database (Elasticsearch) User uploaded content like CSARs, Artifacts, Plugins But Alien4Cloud distribution binaries (excluding plugins) and configuration files won’t be backed up. Download Backup / Restore tool Configurations Unzip the downloaded archive, and edit the file path_to_unzipped_tool/config/config.yml . config.yml elasticsearch : # Name of your elasticsearch cluster cluster_name : alien4cloud # Addresses of elasticsearch cluster nodes addresses : localhost:9300,129.185.67.26:9300 # Where Alien4Cloud's backup files are stored backup.files_dir : /opt/alien4cloud/backups/files # Where Alien4Cloud's files are stored, backup operation will copy data from alien4cloud.dir to backup.files_dir and restore will do inversely alien4cloud.dir : /opt/alien4cloud/data Configure Elasticsearch The backup relies on the snapshot capability of Elasticsearch. In order to be able to use this feature, the Elasticsearch variable ‘path.repo’ must be defined on all elasticsearch cluster nodes. path.repo : /home/elasticsearch/backups Restart elasticsearch so that the new configuration is taken into account: sudo /etc/init.d/elasticsearch restart Configure shared file system (optional) Mount a shared file system between all your elasticsearch cluster nodes. 
Here’s an example with sshfs : # On the file server machine where elasticsearch backups will be hosted sudo adduser elasticsearch # Copy key file that enables ssh login for this user sudo -u elasticsearch mkdir /home/elasticsearch/.ssh sudo -u elasticsearch cp authorized_keys /home/elasticsearch/.ssh # Create the shared remote folder that will be used for elasticsearch backups sudo -u elasticsearch mkdir /home/elasticsearch/backups # On elasticsearch machines # Install sshfs sudo apt-get install sshfs # Create backup folders sudo -u elasticsearch mkdir -p /home/elasticsearch/backups # Mount the remote folder sudo sshfs -o allow_other -o uid = $( id -u elasticsearch ) -o gid = $( id -g elasticsearch ) -o IdentityFile = /home/elasticsearch/key.pem elasticsearch@192.168.1.4:/home/elasticsearch/backups /home/elasticsearch/backups # Test that elasticsearch can write to the backups folder sudo -u elasticsearch touch /home/elasticsearch/backups/test.txt sudo -u elasticsearch rm /home/elasticsearch/backups/test.txt Backup To backup Alien4Cloud, from the root directory of the unzipped tool, perform the command: ./backup-restore-tool.sh -backup -n yourBackupName For more commands and options, you can have the help doc displayed: ./backup-restore-tool.sh -help Restore Alien4Cloud and ElasticSearch states We recommend stopping Alien4Cloud before performing the restore. ElasticSearch MUST be up and running . Alien4Cloud should be restarted once the restoration process is completed. This is quite trivial to do when running in a classical production setup where the elasticsearch process is independent from Alien4Cloud ( See advanced configuration for more details ). However, if running in an embedded configuration, you can’t stop Alien4Cloud without stopping ElasticSearch. In that case, just make sure the platform is not used during the process. 
In any case, if you are 100% sure that the restore operation has no impact on cloud or plugin configuration, you can perform a ‘hot restore’ and don’t need to stop Alien4Cloud. Backup/snapshot access rights The Elasticsearch backup directory (path.repo: /home/elasticsearch/backups) must be accessible by the ElasticSearch process in order to be able to restore the data. To restore Alien4Cloud, from the root directory of the unzipped tool, perform the command: ./backup-restore-tool.sh -restore -n yourBackupName Once data is restored, you can restart the Alien4Cloud server if needed. "},{"title":"Capability definition","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/capability_definition.html","date":null,"categories":[],"body":"A capability definition defines a named, typed set of data that can be associated with a Node Type or Node Template to describe a transparent capability or feature of the software component the node describes. Keynames Keyname Required Type Description tosca_definitions_version type yes string Type of capability or node that is required by the current node. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 description no string The optional description of the Capability Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 properties no list of property values Property values for the properties defined on the capability type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 attributes no list of attribute values Attribute values for the attributes defined on the capability type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 valid_source_types (1) no string[] An optional list of one or more valid names of Node Types that are supported as valid sources of any relationship established to the declared Capability Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 occurrences no range Specifies the boundaries of client requirements the defined capability can serve. 
A value of unbounded indicates that there is no upper boundary. Defaults to [0, unbounded]. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 (1) Valid source types are not supported in alien4cloud. We intend to support valid source types in future releases but believe that there should not be a reason to restrict the usage of a capability to a specific node, as it could restrict the reusability of the node. Grammar # Simple definition is as follows: <capability_defn_name> : <capability_type> # The full definition is as follows: <capability_defn_name> : type : <capability_type> description : <capability_defn_description> properties : <property_values> attributes : <attribute_values> occurrences : <occurrences> Example node_types : fastconnect.nodes.CapabilitySample : capabilities : # Simple form, no properties defined or augmented test_capability : mytypes.mycapabilities.MyCapabilityTypeName # Full form, augmenting properties of the referenced capability type some_capability : type : mytypes.mycapabilities.MyCapabilityTypeName properties : limit : 100 occurrences : [ 0 , 3 ] "},{"title":"Capability filter","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/capability_filter_definition.html","date":null,"categories":[],"body":"A capability filter definition defines criteria for the selection of a TOSCA Node Template based upon the template’s capability properties. Keynames Keyname Required Type Description tosca_definitions_version properties no map of [property filter definition] An optional sequenced list of property filters that would be used to select (filter) matching TOSCA capabilities based upon their property definitions’ values. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 Grammar capabilities : - <capability_name_or_type_1> : properties : - <cap_1_property_filter_def_1> - ... - <cap_m_property_filter_def_n> - ... - <capability_name_or_type_n> : properties : - <cap_1_property_filter_def_1> - ... 
- <cap_m_property_filter_def_n> Example my_node_template : # other details omitted for brevity requirements : - host : node_filter : capabilities : # My “host” Compute node needs these properties: - host : properties : - num_cpus : { in_range : [ 1 , 4 ] } - mem_size : { greater_or_equal : 512 MB } "},{"title":"Capability type","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/capability_type.html","date":null,"categories":[],"body":"A Capability Type is a reusable entity that describes a kind of capability that a Node Type can declare to expose. Requirements (implicit or explicit) that are declared as part of one node can be matched to (i.e., fulfilled by) the Capabilities declared by other nodes. Keynames Keyname Required Type Description tosca_definitions_version derived_from no string An optional parent Capability Type name the Capability Type derives from. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 version (1) no version An optional version for the Entity Type definition. N.A. metadata (2) no map of string Defines a section used to declare additional metadata information. alien_dsl_1_3_0 tosca_simple_yaml_1_0 tags (2) no map of string Defines a section used to declare additional metadata information. alien_dsl_1_3_0 alien_dsl_1_2_0 description no string An optional description for the Capability Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 properties no map of property definitions An optional list of property definitions for the Capability Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 attributes no map of attribute definitions An optional list of attribute definitions for the Capability Type. Not supported on alien4cloud. N.A. valid_source_types no string[] An optional list of one or more valid target entities or entity types (i.e., Node Types or Capability Types). 
alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 (1) version at type level is defined in TOSCA but is optional and there is no example of how it should be managed. We believe in alien4cloud that versions should be managed at the service template/archive level and dispatched to every element defined in the service template/archive. (2) metadata appeared in TOSCA while alien4cloud already had tags support; support for the metadata keyword has been added in the 1.3.1 version. Note that if you specify both metadata and tags, one may silently override the other (this should be avoided). Grammar <capability_type_name> : derived_from : <parent_capability_type_name> description : <capability_description> properties : <property_definitions> See: property_definitions Example The following example shows how to define a capability type: mycompany.mytypes.myapplication.MyFeature : derived_from : tosca.capabilities.Feature description : a custom feature of my company’s application properties : my_feature_setting : type : string my_feature_value : type : integer "},{"title":"TOSCA catalog","baseurl":"","url":"/documentation/1.4.0/user_guide/catalog.html","date":null,"categories":[],"body":" TOSCA types are referred to as components in alien4cloud. High level concepts are detailed in this section . Introduction TOSCA is at the heart of Alien 4 Cloud, and so is the TOSCA Catalog feature. TOSCA is an open standard from OASIS that allows defining components for cloud deployments in a reusable and eventually agnostic fashion. The goal of TOSCA is to let users provide building blocks called Types to define the desired topologies, from a very abstract level to a very concrete level allowing the actual deployment of the topology. Any abstract element in a topology has to be replaced with concrete implementations in order to allow the TOSCA deployer to actually perform the deployment. 
Most TOSCA implementations provide their own implementations for some of the nodes (like the normative ones defined within the standard). For more information on TOSCA and the supported archive format please go here . TOSCA Catalog The Alien 4 Cloud TOSCA Catalog is an index of components/elements defined in a TOSCA archive. Among these elements we find two main categories: Types (reusable building blocks) and Topologies (compositions and definitions of the building blocks that define what a user wants to deploy). When adding or creating a TOSCA archive in Alien 4 Cloud the archive is automatically stored on a file system but also indexed to provide browsing and search features across your various archives, truly making them reusable for all the people working on the alien instance! Accessing the catalog Every user with the role COMPONENT_BROWSER can browse the global catalog to look up both types and topology templates. Archive meta-data In addition to the types and topology in an archive we also index an object that represents the archive and its meta-data. This is referenced in alien as the CSAR (for Cloud Service ARchive). TOSCA Types The first elements indexed in alien 4 cloud are the TOSCA Types. Amongst them we find some high level types used to ease reusability when creating other types: Artifact types Capability types Data types Interface types And some types that can actually get instantiated in a topology: Node types (the main building blocks) Relationship types (elements that can define how relations between nodes can actually be implemented) Topologies Topologies are the second element indexed in alien 4 cloud. While a TOSCA archive may contain multiple types, only a single topology can be defined in an archive. The id of a topology in alien4cloud is the same as the id of the enclosing archive. 
"},{"title":"Artifact repositories","baseurl":"","url":"/documentation/1.4.0/user_guide/catalog_artifact_repositories.html","date":null,"categories":[],"body":" How the repositories are managed When you upload a CSAR that references a repository, Alien tries to fetch the artifact from the remote repository. If the repository type is not supported or if the artifact is not available (wrong URL or wrong credentials), an error is thrown during parsing. In a CSAR, you can reference a repository by its URL. In the components view you can define new artifact repository configurations. This configuration offers you the ability to add credentials to your artifact resolver (which is in charge of fetching your remote artifact). By storing artifact repository configurations (repository URL and credentials) in the Alien4Cloud database, you can create CSARs without hard-coding repository passwords multiple times. Alien4Cloud will be able to retrieve the password using the repository URL. But bear in mind the passwords are stored in plain text and can be seen by anyone accessing the Alien4Cloud database. Tosca support To use repositories in your CSARs use tosca definitions version alien_dsl_1_3_0 or greater. Click on to create a new repository. You can then browse the created repositories: Http The HTTP resolver is the only open-source repository plugin. The concatenation of the repository URL and the artifact file attribute should be the complete path to your file. Example : repositories : fastconnect : url : https://fastconnect.org/maven/service/local/repositories/opensource/content type : http [ ... ] node_types : alien.nodes.Example : artifacts : - http_artifact : file : alien4cloud/alien4cloud-cloudify3-provider/1.4.0-SM2/alien4cloud-cloudify3-provider-1.4.0-SM2.zip repository : fastconnect type : tosca.artifacts.File Repositories specific to the premium version Two repository plugins are premium : git and maven . 
All repository plugins are packaged in the Alien 4 Cloud premium distribution. Git In git, the reference of an artifact is its path inside the git project. If your repository has a new commit between two deployments, Alien will redownload your artifact. To refer to your artifact file, use the following syntax : <branch or tag>:<file path> . If you don’t specify a branch or a tag, the default branch ‘master’ will be used. Example : repositories : aliengithub : url : https://github.com/alien4cloud/samples.git type : git [ ... ] node_types : alien.nodes.Example : artifacts : - git_artifact : file : master:demo-repository/artifacts/settings.properties repository : aliengithub type : tosca.artifacts.File Maven In maven, you need to use the following syntax to refer to your artifact file : <group>:<artifact>:<version>:<classifier>@<extension> . If your maven artifact has no SNAPSHOT qualifier, Alien 4 Cloud will download your file the first time and only that time. Conversely, if your artifact has a SNAPSHOT qualifier and has changed between two deployments, Alien will redownload your artifact. Example : repositories : fastconnect_nexus : url : https://fastconnect.org/maven/content/repositories/opensource type : maven [ ... ] node_types : alien.nodes.Example : artifacts : - maven_artifact : file : alien4cloud:alien4cloud-cloudify3-provider:1.2.0@zip repository : fastconnect_nexus type : tosca.artifacts.File "},{"title":"Custom On-demand Resources Nodes","baseurl":"","url":"/documentation/1.4.0/user_guide/catalog_custom_resources.html","date":null,"categories":[],"body":"Custom on-demand resources are resources provided by the TOSCA catalog but instantiated and managed by the orchestrator. Usually, on-demand resources (computes, block storage, networks …) are provided by the orchestrator as types. They are defined in the locations as nodes to be matched with their abstract versions in topology templates. 
Only the orchestrator knows how to provide a compute on EC2, OpenStack and so on (depending on orchestrator capabilities) … Since 1.3.1, you can provide your own types as on-demand resources. This page explains how to build your own custom on-demand resource types and use them as custom resource nodes in your topologies. A custom resource node is defined by the facts that: it is not hosted on another node. its type is not provided by the orchestrator. When a node that is identified as a custom resource node is encountered by the orchestrator, its scripts should be executed on the ‘manager’. This is the way the cloudify3 orchestrator manages custom resources. You can find samples in the following CSARs: aws-custom-resources : custom on-demand types that use the AWS CLI to provision AWS resources (EC2 instance, MariaDB). aws-ansible-custom-resources : custom on-demand resources that use Ansible to provision AWS resources. On-demand compute To be usable, a compute must expose its IP address as an attribute named ‘ip_address’. For the alien4cloud cloudify orchestrator plugin, an optional agent_config complex property can be used to specify: install_method : whether an agent should be installed or not on the instance (remote, none). user : the username to use to connect to the instance (default is the one defined while bootstrapping). key : the path to the key (on the manager) to use to connect to the instance (default is the path to the key defined while bootstrapping). Known limitations on-demand resources other than computes can’t be scaled "},{"title":"Topology search","baseurl":"","url":"/documentation/1.4.0/user_guide/catalog_topology_search.html","date":null,"categories":[],"body":"Topology list Once you have created / uploaded an archive that contains a topology you should be able to see it in the template list : From now on you can use any template when creating a new application . But before that, you might want to edit the topology . 
"},{"title":"Topology upload","baseurl":"","url":"/documentation/1.4.0/user_guide/catalog_topology_upload.html","date":null,"categories":[],"body":" To understand the topology concept, please refer to this section . Topology template A topology template allows you to create an application structure which may later be used as the root of a real application. You can access this feature from the Topology templates menu and start to create a new template with the topology composer or upload a zip file with your template. Click on and fill in the form. This template name will identify your template and must be unique. You can then compose your template in this view : Just drag and drop your zipped topology in the upload area : "},{"title":"Components/types search","baseurl":"","url":"/documentation/1.4.0/user_guide/catalog_type_search.html","date":null,"categories":[],"body":"Alien4Cloud provides ways to browse the uploaded components, with a search engine allowing filters. Roles and security In order to be able to search the repository you must have the COMPONENT_BROWSER role. How to make a simple search On the left of the components list page, there is a search panel. For a simple search, just type the text in the search field and click the magnifier next to it (or press the Enter key instead). The results of your search will be displayed in the center panel. How to make a filtered search You might wish to filter your search or results when too many components match the simple search. Still in the left search panel, you can select one or more filters (facets). You can also remove them if they do not fit your needs. Note that when more than one filter is selected, Alien4Cloud applies an AND policy. Component overview In order to see component details (description, inheritance, properties, capabilities, requirements, attributes etc.) 
you can just click on it and the following screen will be displayed: "},{"title":"Components/types upload","baseurl":"","url":"/documentation/1.4.0/user_guide/catalog_type_upload.html","date":null,"categories":[],"body":"You cannot upload the same archive (same id and version) multiple times. If you changed an archive, you must increment the version number so you can upload it to Alien. Create your own component You can find more information on TOSCA and how you can write new types in TOSCA archives here . There are multiple ways to upload components in alien4cloud; for every one of them you must first go to the section of the main menu. Drag your archive file > Drop it on the dash dotted area Once the upload has completed successfully you should be able to see the node types contained in the archive in the components browsing panel. Click on [Upload CSAR] > Select your archive (The file is automatically uploaded) Once the upload has completed successfully you should be able to see the node types contained in the archive in the components browsing panel. Alien4cloud allows you to import components from git repositories. Users with the COMPONENT_MANAGER role can, from the component section, access the git repository synchronization section by clicking on the left menu . The git repositories management screen allows you to register a git repository to import and to trigger the import. In order to register a new git repository click on the button. Then fill in the information on the modal as specified below. Repository URL : This is the url of your git repository, for example https://github.com/alien4cloud/samples.git. Note that we support only http(s) urls. Credentials : The username and password that alien4cloud will use to connect to the github repository. On branch / Archive(s) to import : List of branch/sub-folder associations used to locate the TOSCA archive(s) to import from the git repository. 
The sub-folder is optional; by default alien will locate all archives within the git repository. Save the repository locally : If false, alien4cloud will not keep the repository content locally: the git import process will fetch the repository, process the import in alien4cloud and then remove the local clone of the git repository. If true, the local clone will not be removed from the alien server. Default is false. Alien4cloud is very flexible on the structure of your git repositories and how you keep archives within them. We ourselves decided to keep multiple archives within the single samples repository. This is a very specific choice, as we tag the samples branches not according to the versions of the archives but to the version of alien the samples support. You may also store multiple archives within a single git repository when all archives share the same version and are packaged together. Note that in that case you have the choice of having a single archive or multiple ones. The choice here should be focused on usability and merge perspective. The devops guide section contains more information on the possible choices and examples of how and why you should go for one of them. Once imported you can see the newly created git location in the list: Click on to trigger the import process. Once completed the import result is displayed with its state, and any error, warning or info messages. You can now browse and search for components . Roles and security In order to be able to add components to the repository you must have the COMPONENT_MANAGER role. Note that if the archive you wish to upload contains both tosca types (node types, relationship types etc.) and a topology template, then you must have both the COMPONENT_MANAGER (for type upload) and ARCHITECT (for template upload) roles. Upload issues Alien 4 Cloud performs validation of your archive against the TOSCA specification. 
The following image shows the upload of an archive with an error : When deploying on some cloud technologies alien4cloud uses node template names in the names of the generated resources (VMs, BlockStorage etc.). Some cloud APIs do not handle special characters such as dashes or underscores. In addition some people like to set the hostname based on the name of the node template. Therefore, while this is authorized in TOSCA, alien4cloud prevents naming node templates with such characters. If a node template name contains some special character (i.e. not an alphanumeric character from the basic Latin alphabet or the underscore) we will automatically replace these characters. "},{"title":"Workspaces","baseurl":"","url":"/documentation/1.4.0/user_guide/catalog_workspaces.html","date":null,"categories":[],"body":"Workspaces is a new feature introduced in version 1.4.0. The goal of workspaces is to provide catalog isolation so that users can upload specific types without sharing them in the global catalog. Thanks to workspaces an application can define not only topologies but also types in their backing archive and benefit from all the indexing features without sharing specific types across the organization or other applications. Premium edition In the Premium edition Workspaces provide even more benefits with the ability for every user to have their own workspaces and with all the out of the box features to manage promotion of archives from one workspace to another one! See the premium section below for more info! Workspace hierarchy Workspaces are defined in a hierarchy; on top of the hierarchy is the global workspace which basically is the main catalog of components and is managed by users with roles COMPONENT_MANAGER (to add types) and/or ARCHITECT (to add topologies). Before version 1.3 alien had only the global workspace and applications were not allowed to define types. Global workspace App 1 workspace App 2 workspace App 3 workspace ... 
Constraints on workspaces While workspaces provide isolation between the different sub-workspaces there are constraints that alien4cloud enforces for consistency reasons. An archive with the same name and version can not exist in multiple workspaces. Indeed we don’t want to allow the same name and version to have different content. If the same content has to be shared between multiple entities then it should lie in an upper workspace so that ownership and update responsibilities are clear. An archive in an upper workspace is available for read (COMPONENT_BROWSER) to every child workspace. Only COMPONENT_MANAGER (for types) and ARCHITECT (for topologies) can change an archive in the global workspace. Any user with role APPLICATION_MANAGER or DEVOPS can change types and topologies in the application (these users have the COMPONENT_MANAGER and ARCHITECT roles on the application workspace). Every user registered with a role on the application can have read (COMPONENT_BROWSER) access to the application archive content. Premium workspaces In the premium version of alien 4 cloud workspaces are more flexible and designed to support large enterprise collaboration. In addition to application workspaces the enterprise version introduces user workspaces that allow users to validate types and design topologies in their own workspaces. The premium version also supports the management of archive promotion and relocation across workspaces. Global workspace Group 1 workspace Group 2 workspace User 1 workspace User 2 workspace App 3 workspace User 1 workspace App 2 workspace ... Group workspaces While documented already, version 1.3 does not support group workspaces. Stay tuned for next versions! Premium workspaces catalog Adding an archive to a specific workspace As usual, a user can upload an archive into Alien. If the user doesn’t have the permission to manage other workspaces, the types (or a topology) will be added to the user’s workspace. 
If the user has authorization to manage some workspaces, he can select one of these workspaces as the target workspace for the upload. Workspace relocation If a user with sufficient authorization to manage a workspace (COMPONENT_MANAGER for all types ; ARCHITECT for all topologies ; APPLICATION_MANAGER or DEVOPS for an application) wants to change the workspace of an archive for which he has authorization, he can directly change the workspace of this archive in the catalog. Indeed, a dropdown is available to select the workspace of any archive. An archive cannot be relocated to a workspace with less visibility (i.e. from the global workspace to a user workspace) if the archive is a dependency of an archive placed in a workspace without visibility on the target workspace. Workspace promotion If a user, who can’t manage the desired target workspace, wants to promote one of his archives to another workspace, he needs to make a promotion request in the workspace view. A modal will appear and, if the promotion has some impacts, display these impacts. Any user with sufficient rights will be able to accept or discard the request in the promotion management view. Delete the workspace plugin Admin users can remove the workspace plugin. However a bug can occur if some CSARs are located in user or application workspaces during this deletion. To avoid it, we recommend moving all CSARs to the global workspace before this operation. "},{"title":"Certificate generation","baseurl":"","url":"/documentation/1.4.0/admin_guide/certificates.html","date":null,"categories":[],"body":"This is a general purpose guide to generate SSL certificates; this procedure can be used to generate certificates for all Alien4Cloud components that need to secure their communication. Create CA and server certificates First we have to create our own CA, that will be used to sign the certificates. (If you have another CA, then skip this part.) 
Generate a key pair for the CA certificate, and self sign it openssl genrsa -aes256 -out ca-key.pem 4096 openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem Generate a keypair for the server, and sign it with the CA openssl genrsa -out server-key.pem 4096 openssl req -subj \"/CN=YOUR_DOMAIN_NAME\" -sha256 -new -key server-key.pem -out server.csr ## this will generate a certificate for all computes of your domain, you can also specify IP:YOUR_IP to fix one or multiple IPs echo subjectAltName = DNS: \\* .YOUR_DOMAIN_NAME > extfile.cnf openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf Generate a keypair for a client (example: an alien4cloud instance), and sign it with the CA If you do not want to require client authentication by certificate (mutual authentication between server and client), you can skip the step below openssl genrsa -out key.pem 4096 openssl req -subj '/CN=client' -new -key key.pem -out client.csr echo extendedKeyUsage = clientAuth > extfile.cnf openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile.cnf Create java truststore and keystore(s) Now we have to create a truststore with our CA, and add the created keys into one or several keystores. Generate the truststore The Java installation comes with a built-in truststore that contains most of the publicly trusted CAs of the web. If you are building a key for alien4cloud, you must use that truststore and add in your own CA certificate. That file is usually located at $JAVA_HOME/lib/security/cacerts . The default password is changeit . 
cp $JAVA_HOME/lib/security/cacerts ./server-truststore.jks openssl x509 -outform der -in ca.pem -out ca.der keytool -import -alias alien4cloud -keystore server-truststore.jks -file ca.der Generate the server keystore openssl pkcs12 -export -name alien4cloud -in server-cert.pem -inkey server-key.pem -out server-keystore.p12 -chain -CAfile ca.pem -caname root keytool -importkeystore -destkeystore server-keystore.jks -srckeystore server-keystore.p12 -srcstoretype pkcs12 -alias alien4cloud Generate a client keystore If you do not want to require client authentication by certificate (mutual authentication between server and client), you can skip the step below keystore password For some reason, java clients require that all the keys in a keystore have the same password as the keystore. So make sure you use the same password for the client key pair here. openssl pkcs12 -export -name alien4cloudClient -in cert.pem -inkey key.pem -out client-keystore.p12 -chain -CAfile ca.pem -caname root keytool -importkeystore -destkeystore client-keystore.jks -srckeystore client-keystore.p12 -srcstoretype pkcs12 -alias alien4cloudClient Now that we have the keystores and the truststore, we can configure the server (and eventually the client) to take them into account. "},{"title":"Components","baseurl":"","url":"/documentation/1.4.0/concepts/components.html","date":null,"categories":[],"body":"Components are building blocks that directly refer to TOSCA’s Node Types. Alien4Cloud maintains a catalog of Components that is shared across the platform users. Alien’s component catalog is indexed, allowing users to browse, search and use filtering to find the components they need. Thanks to the components available in the platform, architects and application developers will be able to define application topologies (or blueprints). 
There are multiple types of components in Alien 4 Cloud that directly refer to the TOSCA specification; below are explained the different types of components that you can find and extend. Node Types Node Types are used to define cloud resource elements (Compute - machine, Block-Storage - persistent disk, Network etc.) as well as software components (Database, WebServer etc.). The TOSCA Simple profile in YAML provides some pre-defined Node Types; more information on the various pre-defined node types can be found here . A node type, as every component, may expose some capabilities (things that the node provides and that other nodes will be able to consume) and requirements (things that the node actually requires in order to work correctly). For example a Compute node type (that represents a Machine) has the capability to host some software and a Software Component (or any inherited node) requires a Compute on which to be hosted (installed and run). Relationship Types Relationship types are used to connect nodes. A relationship in TOSCA and Alien is directional: it connects the requirement of a source node to the capability of a target node. In order to be used in the right context only, a relationship can specify what type of capability (thus requirement) it can connect. "},{"title":"concat","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/concat_definition.html","date":null,"categories":[],"body":"The concat function is used to concatenate two or more string values within a TOSCA service template. Use this function for attributes. Keynames Keyname Type Required Description string_value_expressions_* list of string or string value expressions yes A list of one or more strings (or expressions that result in a string value) which can be concatenated together into a single string. 
Grammar concat : [ <string_value_expressions_*> ] Example The following example shows how to define an attribute using the concat function: node_types : fastconnect.nodes.FunctionSample : properties : myName : type : string port : type : string attributes : url : { concat : [ \"http://\" , get_attribute : [ SELF , public_ip_address ], \":\" , get_property : [ SELF , port ]] } interfaces : [ ... ] "},{"title":"Concepts","baseurl":"","url":"/documentation/1.4.0/concepts/concepts.html","date":null,"categories":[],"body":"Let us introduce you to the concepts in Alien 4 Cloud. Alien 4 Cloud is an application that allows people in the enterprise to collaborate in order to provide self-service deployment of complex applications, taking into account the different experts through a role based portal. Alien 4 Cloud wants to provide self-service not only for users but also for development teams that want to build and deploy complex applications and leverage cloud resources to deploy their environments in minutes. In order to provide an enterprise self-service portal, alien 4 cloud leverages the following concepts: Location : Deployment target (cloud or set of physical machines) Components : Software components to deploy Topologies (or blueprints): Description of multiple software components assembled together (to build an application). Applications : Actual applications to deploy with environments and versions, each of them being associated with a topology. TOSCA : An emerging standard to describe service components and their relationships On top of these notions Alien 4 Cloud provides a comprehensive set of roles that can be mapped in flexible manners to the people and structure of an enterprise IT department. 
"},{"title":"Constraint clause","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/constraints.html","date":null,"categories":[],"body":"A constraint clause defines an operation along with one or more compatible values that can be used to define a constraint on a property’s or parameter’s allowed values. Available constraints The following is the list of recognized operators (keynames) when defining constraint clauses: Operator Type Value type Description equal scalar all Constrains a property or parameter to a value equal to (‘=’) the value declared. greater_than scalar comparable Constrains a property or parameter to a value greater than (‘>’) the value declared. greater_or_equal scalar comparable Constrains a property or parameter to a value greater than or equal to (‘>=’) the value declared. less_than scalar comparable Constrains a property or parameter to a value less than (‘<’) the value declared. less_or_equal scalar comparable Constrains a property or parameter to a value less than or equal to (‘<=’) the value declared. in_range dual scalar comparable Constrains a property or parameter to a value in range of (inclusive) the two values declared. valid_values list all Constrains a property or parameter to a value that is in the list of declared values. length scalar string Constrains the property or parameter to a value of a given length. min_length scalar string Constrains the property or parameter to a value to a minimum length. max_length scalar string Constrains the property or parameter to a value to a maximum length. pattern regex string Constrains the property or parameter to a value that is allowed by the provided regular expression. The value type comparable refers to integer, float , timestamp and version types, while all refers to any type allowed in the TOSCA simple profile in YAML. Regular expression language in Alien 4 Cloud (not specified in TOSCA currently) is Java regex. 
Grammar # Scalar grammar <operator> : <scalar_value> # Dual scalar grammar <operator> : [ <scalar_value_1> , <scalar_value_2> ] # List grammar <operator> : [ <value_1> , <value_2> , ... , <value_n> ] # Regular expression (regex) grammar pattern : <regular_expression_value> Example The following example shows how to define a node type with constraints on properties: node_types : fastconnect.nodes.ConstraintSample : properties : property_1 : type : string constraints : - length : 6 property_2 : type : string constraints : - min_length : 4 - max_length : 8 property_3 : type : integer constraints : - in_range : [ 2 , 10 ] property_4 : type : integer constraints : - valid_values : [ 2 , 4 , 6 , 8 , 16 , 24 , 32 ] "},{"title":"Operations on Applications","baseurl":"","url":"/documentation/1.4.0/rest/controller_application-controller.html","date":null,"categories":[],"body":"Create a new application in the system. POST /rest/v1/applications Description If successful, returns a rest response with the id of the created application in data. If not successful a rest response with an error content is returned. Role required [ APPLICATIONS_MANAGER ]. By default the application creator will have application roles [APPLICATION_MANAGER, DEPLOYMENT_MANAGER] Parameters Type Name Description Required Schema Default BodyParameter request request true CreateApplicationRequest Responses HTTP Code Description Schema 201 Created RestResponse«string» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search for applications POST /rest/v1/applications/search Description Returns a search result that contains applications matching the request. 
An application is returned only if the connected user has at least one application role in [ APPLICATION_MANAGER APPLICATION_USER APPLICATION_DEVOPS DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true FilteredSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get an application from its id. GET /rest/v1/applications/{applicationId} Description Returns the application details. Application role required [ APPLICATION_MANAGER APPLICATION_USER APPLICATION_DEVOPS DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string Responses HTTP Code Description Schema 200 OK RestResponse«Application» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Updates by merging the given request into the given application . PUT /rest/v1/applications/{applicationId} Description The logged-in user must have the application manager role for this application. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string BodyParameter request request true UpdateApplicationRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete an application from its id. DELETE /rest/v1/applications/{applicationId} Description The logged-in user must have the application manager role for this application. 
Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string Responses HTTP Code Description Schema 200 OK RestResponse«boolean» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Updates the image for the application. POST /rest/v1/applications/{applicationId}/image Description The logged-in user must have the application manager role for this application. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string FormDataParameter file file true file Responses HTTP Code Description Schema 200 OK RestResponse«string» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes multipart/form-data Produces application/json "},{"title":"Manage operations on deployed applications.","baseurl":"","url":"/documentation/1.4.0/rest/controller_application-deployment-controller.html","date":null,"categories":[],"body":"Deploys the application on the configured Cloud. POST /rest/v1/applications/deployment Description Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] and Application environment role required [ DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default BodyParameter deployApplicationRequest deployApplicationRequest true DeployApplicationRequest Responses HTTP Code Description Schema 200 OK RestResponse«object» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get all environments including their current deployment status for a list of applications. POST /rest/v1/applications/environments Description Returns the environments for all given applications. 
Note that only environments the user is authorized to see are returned. Parameters Type Name Description Required Schema Default BodyParameter applicationIds applicationIds true string array Responses HTTP Code Description Schema 200 OK RestResponse«Map«string,Array«ApplicationEnvironmentDTO»»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Deprecated Get the deployment status for the environments that the current user is allowed to see for a given application. POST /rest/v1/applications/statuses Description Returns the current status of an application list from the PaaS it is deployed on for all environments. Parameters Type Name Description Required Schema Default BodyParameter applicationIds applicationIds true string array Responses HTTP Code Description Schema 200 OK RestResponse«Map«string,Map«string,EnvironmentStatusDTO»»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get active deployment for the given application on the given cloud. GET /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/active-deployment Description Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] and Application environment role required [ DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«Deployment» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Un-Deploys the application on the configured PaaS. 
DELETE /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/deployment Description The logged-in user must have the [ APPLICATION_MANAGER ] role for this application. Application environment role required [ DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Get detailed information for every instance of every node of the application on the PaaS. GET /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/deployment/informations Description Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] and Application environment role required [ APPLICATION_USER DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string Responses HTTP Code Description Schema 200 OK DeferredResult«RestResponse«Map«string,Map«string,InstanceInformation»»»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json switchMaintenanceModeOff DELETE /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/deployment/maintenance Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json switchMaintenanceModeOn POST 
/rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/deployment/maintenance Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json switchInstanceMaintenanceModeOff DELETE /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/deployment/{nodeTemplateId}/{instanceId}/maintenance Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string PathParameter nodeTemplateId nodeTemplateId true string PathParameter instanceId instanceId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json switchInstanceMaintenanceModeOn POST /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/deployment/{nodeTemplateId}/{instanceId}/maintenance Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string PathParameter nodeTemplateId nodeTemplateId true string PathParameter instanceId instanceId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get last runtime (deployed) topology of an application or else get the current deployment topology for the environment. 
GET /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/runtime-topology Parameters Type Name Description Required Schema Default PathParameter applicationId Id of the application for which to get deployed topology. true string PathParameter applicationEnvironmentId Id of the environment for which to get deployed topology. true string Responses HTTP Code Description Schema 200 OK RestResponse«TopologyDTO» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Scale the application on a particular node. POST /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/scale/{nodeTemplateId} Description Returns the detailed information of the application on the PaaS where it is deployed. Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] and Application environment role required [ DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string PathParameter nodeTemplateId nodeTemplateId true string QueryParameter instances instances true integer (int32) Responses HTTP Code Description Schema 200 OK DeferredResult«RestResponse«Void»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update the active deployment for the given application on the given cloud. 
POST /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/update-deployment Description Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] and Application environment role required [ DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string Responses HTTP Code Description Schema 200 OK DeferredResult«RestResponse«Void»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Launch a given workflow. POST /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/workflows/{workflowName} Parameters Type Name Description Required Schema Default PathParameter applicationId Application id. true string PathParameter applicationEnvironmentId Deployment id. true string PathParameter workflowName Workflow name. true string Responses HTTP Code Description Schema 200 OK DeferredResult«RestResponse«Void»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Manages application's environments","baseurl":"","url":"/documentation/1.4.0/rest/controller_application-environment-controller.html","date":null,"categories":[],"body":"Create a new application environment POST /rest/v1/applications/{applicationId}/environments Description If successful, returns a rest response with the id of the created application environment in data. If not, a rest response with an error content is returned. 
Role required [ APPLICATIONS_MANAGER ] By default the application environment creator will have application roles [ APPLICATION_MANAGER, DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string BodyParameter request request true ApplicationEnvironmentRequest Responses HTTP Code Description Schema 201 Created RestResponse«string» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get a list of application environments, which have inputs for deployment, that can be copied when the new application topology version is bound to the environment POST /rest/v1/applications/{applicationId}/environments/input-candidates Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string BodyParameter getInputCandidatesRequest getInputCandidatesRequest true GetInputCandidatesRequest Responses HTTP Code Description Schema 200 OK RestResponse«List«ApplicationEnvironment»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search for application environments POST /rest/v1/applications/{applicationId}/environments/search Description Returns a search result that contains application environment DTOs matching the request. 
An application environment is returned only if the connected user has at least one application environment role in [ APPLICATION_USER DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string BodyParameter searchRequest searchRequest true FilteredSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult«ApplicationEnvironmentDTO»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get an application environment from its id GET /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId} Description Returns the application environment. Roles required: Application environment [ APPLICATION_USER DEPLOYMENT_MANAGER ], or application [APPLICATION_MANAGER] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«ApplicationEnvironmentDTO» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Updates by merging the given request into the given application environment PUT /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId} Description The logged-in user must have the application manager role for this application. 
Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string BodyParameter request request true UpdateApplicationEnvironmentRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete an application environment from its id DELETE /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId} Description The logged-in user must have the application manager role for this application. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«boolean» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Get an application environment from its id GET /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/status Description Returns the application environment. 
Application role required [ APPLICATION_USER DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«string» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Deprecated: Get the id of the topology linked to the environment GET /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/topology Description Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«string» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Use new topology version for the given application environment PUT /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/topology-version Description The logged-in user must have the application manager role for this application. 
Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string BodyParameter request request true UpdateTopologyVersionForEnvironmentRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Manages application's environments","baseurl":"","url":"/documentation/1.4.0/rest/controller_application-environment-roles-controller.html","date":null,"categories":[],"body":"Add a role to a group on a specific application environment PUT /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/roles/groups/{groupId}/{role} Description Any user with application role APPLICATION_MANAGER can assign any role to a group of users. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationEnvironmentId applicationEnvironmentId true string PathParameter groupId groupId true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Remove a role from a group on a specific application environment DELETE /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/roles/groups/{groupId}/{role} Description Any user with application role APPLICATION_MANAGER can un-assign any role from a group. 
Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationEnvironmentId applicationEnvironmentId true string PathParameter groupId groupId true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Add a role to a user on a specific application environment PUT /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/roles/users/{username}/{role} Description Any user with application role APPLICATION_MANAGER can assign any role to another user. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationEnvironmentId applicationEnvironmentId true string PathParameter username username true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Remove a role from a user on a specific application environment DELETE /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/roles/users/{username}/{role} Description Any user with application role APPLICATION_MANAGER can unassign any role from another user. 
Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationEnvironmentId applicationEnvironmentId true string PathParameter username username true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Manages application's deployment logs","baseurl":"","url":"/documentation/1.4.0/rest/controller_application-log-controller.html","date":null,"categories":[],"body":"Search for logs of a given deployment POST /rest/v1/applications/{applicationId}/environments/{applicationEnvironmentId}/logs/search Description Returns a search result that contains logs matching the request. Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string BodyParameter searchRequest searchRequest true SearchLogRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult«PaaSDeploymentLog»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search for logs of all deployments for a given application POST /rest/v1/applications/{applicationId}/logs/search Description Returns a search result that contains logs matching the request. 
Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string BodyParameter searchRequest searchRequest true SearchLogRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult«PaaSDeploymentLog»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Operations on Application's meta-properties","baseurl":"","url":"/documentation/1.4.0/rest/controller_application-meta-property-controller.html","date":null,"categories":[],"body":"upsertProperty POST /rest/v1/applications/{applicationId}/properties Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string BodyParameter propertyRequest propertyRequest true Request to update or check the value of a property. Responses HTTP Code Description Schema 200 OK RestResponse«ConstraintInformation» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Operations on applications roles","baseurl":"","url":"/documentation/1.4.0/rest/controller_application-roles-controller.html","date":null,"categories":[],"body":"Add a role to a group on a specific application PUT /rest/v1/applications/{applicationId}/roles/groups/{groupId}/{role} Description Any user with application role APPLICATION_MANAGER can assign any role to a group of users. 
Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter groupId groupId true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Remove a role from a group on a specific application DELETE /rest/v1/applications/{applicationId}/roles/groups/{groupId}/{role} Description Any user with application role APPLICATION_MANAGER can un-assign any role from a group. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter groupId groupId true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Add a role to a user on a specific application PUT /rest/v1/applications/{applicationId}/roles/users/{username}/{role} Description Any user with application role APPLICATION_MANAGER can assign any role to another user. 
Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter username username true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Remove a role from a user on a specific application DELETE /rest/v1/applications/{applicationId}/roles/users/{username}/{role} Description Any user with application role APPLICATION_MANAGER can unassign any role from another user. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter username username true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Operations on application's tags","baseurl":"","url":"/documentation/1.4.0/rest/controller_application-tags-controller.html","date":null,"categories":[],"body":"Update/Create a tag for the application. POST /rest/v1/applications/{applicationId}/tags Description The logged-in user must have the application manager role for this application. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string BodyParameter updateApplicationTagRequest updateApplicationTagRequest true UpdateTagRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete a tag for the application. 
DELETE /rest/v1/applications/{applicationId}/tags/{tagId} Description The logged-in user must have the application manager role for this application. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter tagId tagId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Manages application topology versions for a given application version","baseurl":"","url":"/documentation/1.4.0/rest/controller_application-topology-version-controller.html","date":null,"categories":[],"body":"Create a new application topology version POST /rest/v1/applications/{applicationId}/versions/{applicationVersionId}/topologyVersions Description The logged-in user must have the application manager role for this application. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationVersionId applicationVersionId true string BodyParameter request request true CreateApplicationTopologyVersionRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete an application topology version from its id DELETE /rest/v1/applications/{applicationId}/versions/{applicationVersionId}/topologyVersions/{topologyVersion} Description The logged-in user must have the application manager role for this application. 
Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationVersionId applicationVersionId true string PathParameter topologyVersion topologyVersion true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Manages application's versions","baseurl":"","url":"/documentation/1.4.0/rest/controller_application-version-controller.html","date":null,"categories":[],"body":"Get the first snapshot application version for an application. GET /rest/v1/applications/{applicationId}/versions Description Return the first snapshot application version for an application. Application role required [ APPLICATION_MANAGER APPLICATION_USER APPLICATION_DEVOPS DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string Responses HTTP Code Description Schema 200 OK RestResponse«ApplicationVersion» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Create a new application version. POST /rest/v1/applications/{applicationId}/versions Description If successful, returns a rest response with the id of the created application version in data. If not, a rest response with an error content is returned. Application role required [ APPLICATIONS_MANAGER ]. 
By default the application version creator will have application roles [APPLICATION_MANAGER] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string BodyParameter request request true CreateApplicationVersionRequest Responses HTTP Code Description Schema 201 Created RestResponse«string» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search application versions POST /rest/v1/applications/{applicationId}/versions/search Description Returns a search result that contains application versions matching the request. An application version is returned only if the connected user has at least one application role in [ APPLICATION_MANAGER APPLICATION_USER APPLICATION_DEVOPS DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string BodyParameter searchRequest searchRequest true FilteredSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult«ApplicationVersion»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get an application version from its id. GET /rest/v1/applications/{applicationId}/versions/{applicationVersionId} Description Returns the application version details. 
Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationVersionId applicationVersionId true string Responses HTTP Code Description Schema 200 OK RestResponse«ApplicationVersion» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Updates by merging the given request into the given application version PUT /rest/v1/applications/{applicationId}/versions/{applicationVersionId} Description Updates by merging the given request into the given application version. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationVersionId applicationVersionId true string BodyParameter request request true UpdateApplicationVersionRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete an application version from its id DELETE /rest/v1/applications/{applicationId}/versions/{applicationVersionId} Description The logged-in user must have the application manager role for this application. 
Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationVersionId applicationVersionId true string Responses HTTP Code Description Schema 200 OK RestResponse«boolean» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Audit Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_audit-controller.html","date":null,"categories":[],"body":"Get audit configuration GET /rest/v1/audit/configuration Description Get the audit configuration object. Audit configuration is only accessible to users with role [ ADMIN ] Responses HTTP Code Description Schema 200 OK RestResponse«AuditConfigurationDTO» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Enable/Disable audit on a list of methods POST /rest/v1/audit/configuration/audited-methods Description Audit configuration update is only accessible to users with role [ ADMIN ] Parameters Type Name Description Required Schema Default BodyParameter methods methods true AuditedMethod array Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Enable/Disable audit POST /rest/v1/audit/configuration/enabled Description Audit configuration update is only accessible to users with role [ ADMIN ] Parameters Type Name Description Required Schema Default QueryParameter enabled enabled true boolean Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Reset the audit configuration POST 
/rest/v1/audit/configuration/reset Description Reset the audit configuration to its default state. Audit search is only accessible to user with role [ ADMIN ] Responses HTTP Code Description Schema 200 OK RestResponse«AuditConfigurationDTO» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search for audit trace POST /rest/v1/audit/search Description Returns a search result that contains audit traces matching the request. Audit search is only accessible to user with role [ ADMIN ] Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true FilteredSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Auth Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_auth-controller.html","date":null,"categories":[],"body":"Get the current authentication status and user’s roles. GET /rest/v1/auth/status Description Return the current user’s status and its roles. Responses HTTP Code Description Schema 200 OK RestResponse«UserStatus» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Operations on CSARs","baseurl":"","url":"/documentation/1.4.0/rest/controller_cloud-service-archive-controller.html","date":null,"categories":[],"body":"Upload a csar zip file. 
POST /rest/v1/csars Parameters Type Name Description Required Schema Default QueryParameter workspace workspace false string FormDataParameter file file true file Responses HTTP Code Description Schema 200 OK RestResponse«CsarUploadResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes multipart/form-data Produces application/json Search for cloud service archives. POST /rest/v1/csars/search Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true FilteredSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get a CSAR given its id. GET /rest/v1/csars/{csarId} Description Returns a CSAR. Parameters Type Name Description Required Schema Default PathParameter csarId csarId true string Responses HTTP Code Description Schema 200 OK RestResponse«CsarInfoDTO» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete a CSAR given its id. DELETE /rest/v1/csars/{csarId} Parameters Type Name Description Required Schema Default PathParameter csarId csarId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«Usage»» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Add a dependency to the csar with the given id. 
POST /rest/v1/csars/{csarId}/dependencies Parameters Type Name Description Required Schema Default PathParameter csarId csarId true string BodyParameter dependency dependency true CSARDependency Responses HTTP Code Description Schema 200 OK RestResponse«boolean» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Operations on Components","baseurl":"","url":"/documentation/1.4.0/rest/controller_component-controller.html","date":null,"categories":[],"body":"Get details for a component (tosca type) from its id (including archive hash). GET /rest/v1/components/element/{elementId}/version/{version} Parameters Type Name Description Required Schema Default PathParameter elementId elementId true string PathParameter version version true string QueryParameter toscaType toscaType false enum (NODE_TYPE, CAPABILITY_TYPE, RELATIONSHIP_TYPE, ARTIFACT_TYPE) Responses HTTP Code Description Schema 200 OK RestResponse«AbstractToscaType» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Get details for a component (tosca type). GET /rest/v1/components/element/{elementId}/versions Parameters Type Name Description Required Schema Default PathParameter elementId elementId true string QueryParameter toscaType toscaType false enum (NODE_TYPE, CAPABILITY_TYPE, RELATIONSHIP_TYPE, ARTIFACT_TYPE) Responses HTTP Code Description Schema 200 OK RestResponse«Array«CatalogVersionResult»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Verify that a component (tosca element) exists in alien’s repository. 
POST /rest/v1/components/exist Parameters Type Name Description Required Schema Default BodyParameter checkElementExistRequest checkElementExistRequest true ElementFromArchiveRequest Responses HTTP Code Description Schema 200 OK RestResponse«boolean» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get details for a component (tosca type). POST /rest/v1/components/getInArchives Parameters Type Name Description Required Schema Default BodyParameter checkElementExistRequest checkElementExistRequest true ElementFromArchiveRequest Responses HTTP Code Description Schema 200 OK RestResponse«AbstractToscaType» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Set the given node type as default for the given capability. POST /rest/v1/components/recommendation Parameters Type Name Description Required Schema Default BodyParameter recommendationRequest recommendationRequest true RecommendationRequest Responses HTTP Code Description Schema 200 OK RestResponse«NodeType» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get details for an indexed node type. GET /rest/v1/components/recommendation/{capability} Parameters Type Name Description Required Schema Default PathParameter capability capability true string Responses HTTP Code Description Schema 200 OK RestResponse«NodeType» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Search for components (tosca types) in alien. 
POST /rest/v1/components/search Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true ComponentSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult«AbstractToscaType»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Remove a recommendation for a node type. POST /rest/v1/components/unflag Description If a node type is set as default for a given capability, you can remove this setting by calling this operation with the right request parameters. Parameters Type Name Description Required Schema Default BodyParameter recommendationRequest recommendationRequest true RecommendationRequest Responses HTTP Code Description Schema 200 OK RestResponse«NodeType» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update or insert a tag for a component (tosca element). POST /rest/v1/components/{componentId}/tags Parameters Type Name Description Required Schema Default PathParameter componentId componentId true string BodyParameter updateTagRequest updateTagRequest true UpdateTagRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete a tag for a component (tosca element). 
DELETE /rest/v1/components/{componentId}/tags/{tagId} Parameters Type Name Description Required Schema Default PathParameter componentId componentId true string PathParameter tagId tagId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Get details for a component (tosca type) from its id (including archive hash). GET /rest/v1/components/{id} Parameters Type Name Description Required Schema Default PathParameter id id true string QueryParameter toscaType toscaType false enum (NODE_TYPE, CAPABILITY_TYPE, RELATIONSHIP_TYPE, ARTIFACT_TYPE) Responses HTTP Code Description Schema 200 OK RestResponse«AbstractToscaType» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / "},{"title":"Operations on CSAR Git","baseurl":"","url":"/documentation/1.4.0/rest/controller_csar-git-controller.html","date":null,"categories":[],"body":"Search for TOSCA CSAR git repositories. GET /rest/v1/csarsgit Parameters Type Name Description Required Schema Default QueryParameter query Query text. false string QueryParameter from Query from the given index. false integer (int32) QueryParameter size Maximum number of results to retrieve. false integer (int32) Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult«CsarGitRepository»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Create a new CSARGit from a Git location in ALIEN. POST /rest/v1/csarsgit Parameters Type Name Description Required Schema Default BodyParameter request request true Request for creation of a new csar git repository. 
Responses HTTP Code Description Schema 200 OK RestResponse«string» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Retrieve information on a registered TOSCA CSAR git repository. GET /rest/v1/csarsgit/{id} Parameters Type Name Description Required Schema Default PathParameter id Id of the csar git repository to get true string Responses HTTP Code Description Schema 200 OK RestResponse«CsarGitRepository» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Update a CSARGit by id. PUT /rest/v1/csarsgit/{id} Parameters Type Name Description Required Schema Default PathParameter id Id of the csar git repository to update true string BodyParameter request request true Request for creation of a new csar git repository. Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Delete a registered TOSCA CSAR git repository. DELETE /rest/v1/csarsgit/{id} Parameters Type Name Description Required Schema Default PathParameter id Id of the csar git repository to delete true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces / Specify a CSAR from Git and proceed to its import in Alien. 
POST /rest/v1/csarsgit/{id} Parameters Type Name Description Required Schema Default PathParameter id id true string Responses HTTP Code Description Schema 200 OK RestResponse«List«ParsingResult«Csar»»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / "},{"title":"Operations on Deployments","baseurl":"","url":"/documentation/1.4.0/rest/controller_deployment-controller.html","date":null,"categories":[],"body":"Get 100 last deployments for an orchestrator. GET /rest/v1/deployments Parameters Type Name Description Required Schema Default QueryParameter orchestratorId Id of the orchestrator for which to get deployments. If not provided, get deployments for all orchestrators false string QueryParameter sourceId Id of the application for which to get deployments. if not provided, get deployments for all applications false string QueryParameter includeSourceSummary include or not the source (application or csar) summary in the results false boolean Responses HTTP Code Description Schema 200 OK RestResponse«List«DeploymentDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get a list of deployments from their ids. POST /rest/v1/deployments/bulk/ids Parameters Type Name Description Required Schema Default BodyParameter deploymentIds deploymentIds true string array Responses HTTP Code Description Schema 200 OK JsonRawRestResponse 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json getEvents GET /rest/v1/deployments/{applicationEnvironmentId}/events Parameters Type Name Description Required Schema Default PathParameter applicationEnvironmentId Id of the environment for which to get events. true string QueryParameter from Query from the given index. 
false integer (int32) QueryParameter size Maximum number of results to retrieve. false integer (int32) Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get deployment status from its id. GET /rest/v1/deployments/{deploymentId}/status Parameters Type Name Description Required Schema Default PathParameter deploymentId Deployment id. true string Responses HTTP Code Description Schema 200 OK RestResponse«string» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Undeploy deployment from its id. GET /rest/v1/deployments/{deploymentId}/undeploy Parameters Type Name Description Required Schema Default PathParameter deploymentId Deployment id. true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"This API allows performing admin-oriented requests on deployment events.","baseurl":"","url":"/documentation/1.4.0/rest/controller_deployment-events-controller.html","date":null,"categories":[],"body":"Get deployment status events from a given date. POST /rest/v1/deployments/events/status Description Batch processing oriented API to retrieve deployment status events. This API is not intended for frequent requests but can retrieve a lot of data. Parameters Type Name Description Required Schema Default BodyParameter timedRequest timedRequest true TimedRequest Responses HTTP Code Description Schema 200 OK GetMultipleJsonResult 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get deployment status events from a given date. 
GET /rest/v1/deployments/events/status/scroll Description Batch processing oriented API to retrieve deployment status events. This API is not intended for frequent requests but can retrieve a lot of data. Parameters Type Name Description Required Schema Default QueryParameter scrollId scrollId true string Responses HTTP Code Description Schema 200 OK ScrollJsonResult 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get deployment status events from a given date. POST /rest/v1/deployments/events/status/scroll Description Batch processing oriented API to retrieve deployment status events. This API is not intended for frequent requests but can retrieve a lot of data. Parameters Type Name Description Required Schema Default BodyParameter timedRequest timedRequest true ScrollTimedRequest Responses HTTP Code Description Schema 200 OK ScrollJsonResult 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Prepare a topology to be deployed on a specific environment (location matching, node matching and inputs configuration).","baseurl":"","url":"/documentation/1.4.0/rest/controller_deployment-topology-controller.html","date":null,"categories":[],"body":"Get the deployment topology of an application given an environment. 
GET /rest/v1/applications/{appId}/environments/{environmentId}/deployment-topology Description Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] and Application environment role required [ DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter appId appId true string PathParameter environmentId environmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«DeploymentTopologyDTO» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Updates by merging the given request into the given application’s deployment topology. PUT /rest/v1/applications/{appId}/environments/{environmentId}/deployment-topology Description Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] and Application environment role required [ DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter appId appId true string PathParameter environmentId environmentId true string BodyParameter updateRequest updateRequest true UpdateDeploymentTopologyRequest Responses HTTP Code Description Schema 200 OK RestResponse«object» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json updateDeploymentInputArtifact POST /rest/v1/applications/{appId}/environments/{environmentId}/deployment-topology/inputArtifacts/{inputArtifactId}/update Parameters Type Name Description Required Schema Default PathParameter appId appId true string PathParameter environmentId environmentId true string PathParameter inputArtifactId inputArtifactId true string BodyParameter artifact artifact true DeploymentArtifact Responses HTTP Code Description Schema 200 OK RestResponse«DeploymentTopologyDTO» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Upload 
input artifact. POST /rest/v1/applications/{appId}/environments/{environmentId}/deployment-topology/inputArtifacts/{inputArtifactId}/upload Description The logged-in user must have the application manager role for this application. Application role required [ APPLICATION_MANAGER DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter appId appId true string PathParameter environmentId environmentId true string PathParameter inputArtifactId inputArtifactId true string FormDataParameter file file true file Responses HTTP Code Description Schema 200 OK RestResponse«DeploymentTopologyDTO» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes multipart/form-data Produces application/json Set location policies for a deployment. Creates the {@link DeploymentTopology} object linked to this deployment if it does not exist yet. POST /rest/v1/applications/{appId}/environments/{environmentId}/deployment-topology/location-policies Description Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] and Application environment role required [ DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter appId Id of the application. true string PathParameter environmentId Id of the environment on which to set the location policies. true string BodyParameter request Location policies request body. true SetLocationPoliciesRequest Responses HTTP Code Description Schema 200 OK RestResponse«DeploymentTopologyDTO» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Substitute a specific node by the location resource template in the topology of an application given an environment. 
POST /rest/v1/applications/{appId}/environments/{environmentId}/deployment-topology/substitutions/{nodeId} Description Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] and Application environment role required [ DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter appId appId true string PathParameter environmentId environmentId true string PathParameter nodeId nodeId true string QueryParameter locationResourceTemplateId locationResourceTemplateId true string Responses HTTP Code Description Schema 200 OK RestResponse«DeploymentTopologyDTO» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Update substitution’s capability property. POST /rest/v1/applications/{appId}/environments/{environmentId}/deployment-topology/substitutions/{nodeId}/capabilities/{capabilityName}/properties Parameters Type Name Description Required Schema Default PathParameter appId appId true string PathParameter environmentId environmentId true string PathParameter nodeId nodeId true string PathParameter capabilityName capabilityName true string BodyParameter updateRequest updateRequest true UpdatePropertyRequest Responses HTTP Code Description Schema 200 OK RestResponse«object» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Update substitution’s property. 
POST /rest/v1/applications/{appId}/environments/{environmentId}/deployment-topology/substitutions/{nodeId}/properties Parameters Type Name Description Required Schema Default PathParameter appId appId true string PathParameter environmentId environmentId true string PathParameter nodeId nodeId true string BodyParameter updateRequest updateRequest true UpdatePropertyRequest Responses HTTP Code Description Schema 200 OK RestResponse«object» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / "},{"title":"Endpoint Mvc Adapter","baseurl":"","url":"/documentation/1.4.0/rest/controller_endpoint-mvc-adapter.html","date":null,"categories":[],"body":"invoke GET /rest/admin/autoconfig Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/autoconfig.json Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/beans Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/beans.json Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/configprops Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/configprops.json Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content 
Consumes application/json Produces application/json invoke GET /rest/admin/dump Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/dump.json Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/info Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/info.json Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/mappings Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/mappings.json Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/trace Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/trace.json Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Bulk API for application environments.","baseurl":"","url":"/documentation/1.4.0/rest/controller_environment-bulk-controller.html","date":null,"categories":[],"body":"Get a list of environments from their ids. 
POST /rest/v1/applications/environments/bulk/ids Parameters Type Name Description Required Schema Default BodyParameter deploymentIds deploymentIds true string array Responses HTTP Code Description Schema 200 OK JsonRawRestResponse 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Environment Mvc Endpoint","baseurl":"","url":"/documentation/1.4.0/rest/controller_environment-mvc-endpoint.html","date":null,"categories":[],"body":"invoke GET /rest/admin/env Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/env.json Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json value GET /rest/admin/env/{name} Parameters Type Name Description Required Schema Default PathParameter name name true string Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Generic Suggestion Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_generic-suggestion-controller.html","date":null,"categories":[],"body":"Create a suggestion entry POST /rest/v1/suggestions/ Parameters Type Name Description Required Schema Default BodyParameter request request true Creation request for a suggestion. 
Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Initialize the default configured suggestions POST /rest/v1/suggestions/init Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get matched suggestions GET /rest/v1/suggestions/{suggestionId}/values Description Returns the matched suggestions. Parameters Type Name Description Required Schema Default PathParameter suggestionId suggestionId true string QueryParameter input input false string QueryParameter limit limit false integer (int32) Responses HTTP Code Description Schema 200 OK RestResponse«Array«string»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Add new suggestion value PUT /rest/v1/suggestions/{suggestionId}/values/{value} Parameters Type Name Description Required Schema Default PathParameter suggestionId suggestionId true string PathParameter value value true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Group Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_group-controller.html","date":null,"categories":[],"body":"Create a new group in ALIEN. 
POST /rest/v1/groups Parameters Type Name Description Required Schema Default BodyParameter request request true CreateGroupRequest Responses HTTP Code Description Schema 200 OK RestResponse«string» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get multiple groups from their ids. POST /rest/v1/groups/getGroups Description Returns a rest response that contains the list of requested groups. Parameters Type Name Description Required Schema Default BodyParameter ids ids true string array Responses HTTP Code Description Schema 200 OK RestResponse«List«Group»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search for users registered in alien. POST /rest/v1/groups/search Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true FilteredSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get a group based on its id. GET /rest/v1/groups/{groupId} Description Returns a rest response that contains the group’s details. 
Parameters Type Name Description Required Schema Default PathParameter groupId groupId true string Responses HTTP Code Description Schema 200 OK RestResponse«Group» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update a group by merging the groupUpdateRequest into the existing group PUT /rest/v1/groups/{groupId} Parameters Type Name Description Required Schema Default PathParameter groupId groupId true string BodyParameter groupUpdateRequest groupUpdateRequest true UpdateGroupRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete an existing group from the repository. DELETE /rest/v1/groups/{groupId} Parameters Type Name Description Required Schema Default PathParameter groupId groupId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Add a role to a group. PUT /rest/v1/groups/{groupId}/roles/{role} Parameters Type Name Description Required Schema Default PathParameter groupId groupId true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Remove a role from a group. 
DELETE /rest/v1/groups/{groupId}/roles/{role} Parameters Type Name Description Required Schema Default PathParameter groupId groupId true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Add a user to a group. PUT /rest/v1/groups/{groupId}/users/{username} Parameters Type Name Description Required Schema Default PathParameter groupId groupId true string PathParameter username username true string Responses HTTP Code Description Schema 200 OK RestResponse«User» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Remove a user from a group. DELETE /rest/v1/groups/{groupId}/users/{username} Parameters Type Name Description Required Schema Default PathParameter groupId groupId true string PathParameter username username true string Responses HTTP Code Description Schema 200 OK RestResponse«User» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Health Mvc Endpoint","baseurl":"","url":"/documentation/1.4.0/rest/controller_health-mvc-endpoint.html","date":null,"categories":[],"body":"invoke GET /rest/admin/health Parameters Type Name Description Required Schema Default BodyParameter principal principal false Principal Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke PUT /rest/admin/health Parameters Type Name Description Required Schema Default BodyParameter principal principal false Principal Responses HTTP Code Description Schema 200 OK object 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes 
application/json Produces application/json invoke DELETE /rest/admin/health Parameters Type Name Description Required Schema Default BodyParameter principal principal false Principal Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json invoke POST /rest/admin/health Parameters Type Name Description Required Schema Default BodyParameter principal principal false Principal Responses HTTP Code Description Schema 200 OK object 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke PATCH /rest/admin/health Parameters Type Name Description Required Schema Default BodyParameter principal principal false Principal Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json invoke GET /rest/admin/health.json Parameters Type Name Description Required Schema Default BodyParameter principal principal false Principal Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke PUT /rest/admin/health.json Parameters Type Name Description Required Schema Default BodyParameter principal principal false Principal Responses HTTP Code Description Schema 200 OK object 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke DELETE /rest/admin/health.json Parameters Type Name Description Required Schema Default BodyParameter principal principal false Principal Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes 
application/json Produces application/json invoke POST /rest/admin/health.json Parameters Type Name Description Required Schema Default BodyParameter principal principal false Principal Responses HTTP Code Description Schema 200 OK object 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke PATCH /rest/admin/health.json Parameters Type Name Description Required Schema Default BodyParameter principal principal false Principal Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Heapdump Mvc Endpoint","baseurl":"","url":"/documentation/1.4.0/rest/controller_heapdump-mvc-endpoint.html","date":null,"categories":[],"body":"invoke GET /rest/admin/heapdump Parameters Type Name Description Required Schema Default QueryParameter live live false boolean Responses HTTP Code Description Schema 200 OK No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/octet-stream invoke GET /rest/admin/heapdump.json Parameters Type Name Description Required Schema Default QueryParameter live live false boolean Responses HTTP Code Description Schema 200 OK No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/octet-stream "},{"title":"Manages locations for a given orchestrator.","baseurl":"","url":"/documentation/1.4.0/rest/controller_location-controller.html","date":null,"categories":[],"body":"Get all locations for a given orchestrator. GET /rest/v1/orchestrators/{orchestratorId}/locations Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which to get all locations. 
false string Responses HTTP Code Description Schema 200 OK RestResponse«List«LocationDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Create a new location. POST /rest/v1/orchestrators/{orchestratorId}/locations Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which the location is defined. false string BodyParameter locationRequest Request for location creation true Request for creation of a new location. Responses HTTP Code Description Schema 201 Created RestResponse«string» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get a location from its id. GET /rest/v1/orchestrators/{orchestratorId}/locations/{id} Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which the location is defined. false string PathParameter id Id of the location to get true string Responses HTTP Code Description Schema 200 OK RestResponse«LocationDTO» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update the name of an existing location. PUT /rest/v1/orchestrators/{orchestratorId}/locations/{id} Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which the location is defined. false string PathParameter id Id of the location to update true string BodyParameter updateRequest Location update request, representing the fields to update and their new values. true UpdateLocationRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete an existing location. 
DELETE /rest/v1/orchestrators/{orchestratorId}/locations/{id} Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which the location is defined. false string PathParameter id Id of the location to delete. true string Responses HTTP Code Description Schema 200 OK RestResponse«boolean» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Update values for meta-properties associated with locations.","baseurl":"","url":"/documentation/1.4.0/rest/controller_location-meta-properties-controller.html","date":null,"categories":[],"body":"upsertMetaProperty POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/properties Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which the location is defined. false string PathParameter locationId Id of the location to get true string BodyParameter propertyRequest The property update request. true Request to update or check the value of a property. Responses HTTP Code Description Schema 200 OK RestResponse«ConstraintInformation» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Location resource security batch operations","baseurl":"","url":"/documentation/1.4.0/rest/controller_location-resources-batch-security-controller.html","date":null,"categories":[],"body":"Update applications/environments authorized to access the location resource POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/security/environmentsPerApplication Description Only user with ADMIN role can update authorized applications/environments for the location. 
Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string BodyParameter request request true ApplicationEnvironmentAuthorizationUpdateRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Bulk api to grant/revoke permissions for multiple groups on multiple location resources. POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/security/groups Description Only user with ADMIN role can grant access to a group. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string BodyParameter request request true SubjectsAuthorizationRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Bulk api to grant/revoke permissions to multiple users on multiple location resources. POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/security/users Description Only user with ADMIN role can grant access to other users. 
Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string BodyParameter request request true SubjectsAuthorizationRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Manages locations for a given orchestrator.","baseurl":"","url":"/documentation/1.4.0/rest/controller_location-resources-controller.html","date":null,"categories":[],"body":"Add resource template to a location. POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which to add resource template. true string PathParameter locationId Id of the location of the orchestrator to add resource template. true string BodyParameter resourceTemplateRequest resourceTemplateRequest true Request for creation of a new location’s resource. Responses HTTP Code Description Schema 200 OK RestResponse«Contains a custom resource template with its location’s updated dependencies.» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Auto configure the resources, if the location configurator plugin provides a way to do so. GET /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/auto-configure Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which to Auto configure the resources. true string PathParameter locationId Id of the location of the orchestrator to Auto configure the resources. 
true string Responses HTTP Code Description Schema 200 OK RestResponse«List«LocationResourceTemplate»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update location’s resource. PUT /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{id} Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which to update resource template. true string PathParameter locationId Id of the location of the orchestrator to update resource template. true string PathParameter id Id of the location’s resource. true string BodyParameter mergeRequest mergeRequest true Request to update a location resource. Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete location’s resource. DELETE /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{id} Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which to delete resource template. true string PathParameter locationId Id of the location of the orchestrator to delete resource template. true string PathParameter id Id of the location’s resource. true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Update location’s resource’s capability template capability property. POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{id}/template/capabilities/{capabilityName}/properties Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which to update resource template capability property. 
true string PathParameter locationId Id of the location of the orchestrator to update resource template capability property. true string PathParameter id Id of the location’s resource. true string PathParameter capabilityName Id of the location’s resource template capability. true string BodyParameter updateRequest updateRequest true Request to update a location resource template property. Responses HTTP Code Description Schema 200 OK RestResponse«ConstraintInformation» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update location’s resource’s template property. POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{id}/template/properties Parameters Type Name Description Required Schema Default PathParameter orchestratorId Id of the orchestrator for which to update resource template property. true string PathParameter locationId Id of the location of the orchestrator to update resource template property. true string PathParameter id Id of the location’s resource. true string BodyParameter updateRequest updateRequest true Request to update a location resource template property. Responses HTTP Code Description Schema 200 OK RestResponse«ConstraintInformation» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Location resource security operations","baseurl":"","url":"/documentation/1.4.0/rest/controller_location-resources-security-controller.html","date":null,"categories":[],"body":"Revoke the application’s authorisation to access the location resource DELETE /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{resourceId}/security/applications/{applicationId} Description Only user with ADMIN role can revoke access to the location. 
Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter applicationId applicationId true string PathParameter resourceId resourceId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json List all applications/environments authorized to access the location resource GET /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{resourceId}/security/environmentsPerApplication Description Only user with ADMIN role can list authorized applications/environments for the location. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter resourceId resourceId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«ApplicationEnvironmentAuthorizationDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update applications/environments authorized to access the location resource POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{resourceId}/security/environmentsPerApplication Description Only user with ADMIN role can update authorized applications/environments for the location. 
Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter resourceId resourceId true string BodyParameter request request true ApplicationEnvironmentAuthorizationUpdateRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json List all groups authorized to access the location GET /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{resourceId}/security/groups Description Only user with ADMIN role can list authorized groups to the location. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter resourceId resourceId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«GroupDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Grant access to the location to the groups POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{resourceId}/security/groups Description Only user with ADMIN role can grant access to a group. 
Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter resourceId resourceId true string BodyParameter groupIds groupIds true string array Responses HTTP Code Description Schema 200 OK RestResponse«List«GroupDTO»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Revoke the group’s authorisation to access the location DELETE /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{resourceId}/security/groups/{groupId} Description Only user with ADMIN role can revoke access to the location. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter resourceId resourceId true string PathParameter groupId groupId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«GroupDTO»» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json List all users authorized to access the location resource GET /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{resourceId}/security/users Description Only user with ADMIN role can list authorized users to the location. 
Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter resourceId resourceId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«UserDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Grant access to the location’s resource to the users, send back the new authorised users list POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{resourceId}/security/users Description Only user with ADMIN role can grant access to other users. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter resourceId resourceId true string BodyParameter userNames userNames true string array Responses HTTP Code Description Schema 200 OK RestResponse«List«UserDTO»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Revoke the user’s authorisation to access a location resource, send back the new authorised users list DELETE /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/resources/{resourceId}/security/users/{username} Description Only user with ADMIN role can revoke access to the location. 
Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter resourceId resourceId true string PathParameter username username true string Responses HTTP Code Description Schema 200 OK RestResponse«List«UserDTO»» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Orchestrator security operations","baseurl":"","url":"/documentation/1.4.0/rest/controller_location-security-controller.html","date":null,"categories":[],"body":"List all applications/environments authorized to access the location GET /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/applications/search Description Only user with ADMIN role can list authorized applications/environments to the location. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string QueryParameter query Text Query to search. false string QueryParameter from Query from the given index. false integer (int32) QueryParameter size Maximum number of results to retrieve. false integer (int32) Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult«ApplicationEnvironmentAuthorizationDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Revoke the application’s authorisation to access the location DELETE /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/applications/{applicationId} Description Only user with ADMIN role can revoke access to the location. 
Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter applicationId applicationId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json List all applications/environments authorized to access the location GET /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/environmentsPerApplication Description Only user with ADMIN role can list authorized applications/environments for the location. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«ApplicationEnvironmentAuthorizationDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update applications/environments authorized to access the location POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/environmentsPerApplication Description Only user with ADMIN role can update authorized applications/environments for the location. 
Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string BodyParameter request request true ApplicationEnvironmentAuthorizationUpdateRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json List all groups authorized to access the location GET /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/groups Description Only user with ADMIN role can list authorized groups to the location. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«GroupDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Grant access to the location to the groups POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/groups Description Only user with ADMIN role can grant access to a group. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string BodyParameter groupIds groupIds true string array Responses HTTP Code Description Schema 200 OK RestResponse«List«GroupDTO»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json List all groups authorized to access the location GET /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/groups/search Description Only user with ADMIN role can list authorized groups to the location. 
Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string QueryParameter query Text Query to search. false string QueryParameter from Query from the given index. false integer (int32) QueryParameter size Maximum number of results to retrieve. false integer (int32) Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult«GroupDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Revoke the group’s authorisation to access the location DELETE /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/groups/{groupId} Description Only user with ADMIN role can revoke access to the location. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter groupId groupId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«GroupDTO»» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json List all users authorized to access the location GET /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/users Description Only user with ADMIN role can list authorized users to the location. 
Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«UserDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Grant access to the location to the users, send back the new authorised users list POST /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/users Description Only user with ADMIN role can grant access to other users. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string BodyParameter userNames userNames true string array Responses HTTP Code Description Schema 200 OK RestResponse«List«UserDTO»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json List all users authorized to access the location GET /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/users/search Description Only user with ADMIN role can list authorized users to the location. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string QueryParameter query Text Query to search. false string QueryParameter from Query from the given index. false integer (int32) QueryParameter size Maximum number of results to retrieve. 
false integer (int32) Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult«UserDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Revoke the user’s authorisation to access the location, send back the new authorised users list DELETE /rest/v1/orchestrators/{orchestratorId}/locations/{locationId}/security/users/{username} Description Only user with ADMIN role can revoke access to the location. Parameters Type Name Description Required Schema Default PathParameter orchestratorId orchestratorId true string PathParameter locationId locationId true string PathParameter username username true string Responses HTTP Code Description Schema 200 OK RestResponse«List«UserDTO»» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Administration api for deployment logs access.","baseurl":"","url":"/documentation/1.4.0/rest/controller_log-controller.html","date":null,"categories":[],"body":"Search for logs of a given deployment POST /rest/v1/deployment/logs/search Description Returns a search result that contains logs matching the request. Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true SearchLogRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult«PaaSDeploymentLog»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Allow to create/read/update/delete and search services linked to an application environment.","baseurl":"","url":"/documentation/1.4.0/rest/controller_managed-service-resource-controller.html","date":null,"categories":[],"body":"Get a service associated with an application environment. 
GET /rest/v1/applications/{applicationId}/environments/{environmentId}/services Parameters Type Name Description Required Schema Default PathParameter environmentId environmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«Service.» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Delete the managed service resource associated with an application environment. DELETE /rest/v1/applications/{applicationId}/environments/{environmentId}/services Description The service can not be deleted if used by other resources. Parameters Type Name Description Required Schema Default PathParameter environmentId environmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces / Create a service from an application environment. POST /rest/v1/applications/{applicationId}/environments/{environmentId}/services Parameters Type Name Description Required Schema Default PathParameter environmentId environmentId true string BodyParameter createRequest Create service true Request for creation of a new service. Responses HTTP Code Description Schema 201 Created RestResponse«string» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Unbind the managed service resource from associated application environment. PATCH /rest/v1/applications/{applicationId}/environments/{environmentId}/services Description This operation will only stop the management of the service via the application environment. The Service will still be present in Alien4Cloud, and updatable via service api or admin ui. 
Parameters Type Name Description Required Schema Default PathParameter environmentId environmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces / "},{"title":"Metrics Mvc Endpoint","baseurl":"","url":"/documentation/1.4.0/rest/controller_metrics-mvc-endpoint.html","date":null,"categories":[],"body":"invoke GET /rest/admin/metrics Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json invoke GET /rest/admin/metrics.json Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json value GET /rest/admin/metrics/{name} Parameters Type Name Description Required Schema Default PathParameter name name true string Responses HTTP Code Description Schema 200 OK object 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Get and update orchestrator configuration.","baseurl":"","url":"/documentation/1.4.0/rest/controller_orchestrator-configuration-controller.html","date":null,"categories":[],"body":"Get an orchestrator configuration. GET /rest/v1/orchestrators/{id}/configuration Parameters Type Name Description Required Schema Default PathParameter id Id of the orchestrator to get true string Responses HTTP Code Description Schema 200 OK RestResponse«OrchestratorConfiguration» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update the configuration for an orchestrator. 
PUT /rest/v1/orchestrators/{id}/configuration Parameters Type Name Description Required Schema Default PathParameter id Id of the orchestrator for which to update the configuration. true string BodyParameter configuration The configuration object for the orchestrator - Type depends on the selected orchestrator. true object Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Manages orchestrators.","baseurl":"","url":"/documentation/1.4.0/rest/controller_orchestrator-controller.html","date":null,"categories":[],"body":"Search for orchestrators. GET /rest/v1/orchestrators Parameters Type Name Description Required Schema Default QueryParameter query Query text. false string QueryParameter connectedOnly If true only connected orchestrators will be retrieved. false boolean QueryParameter from Query from the given index. false integer (int32) QueryParameter size Maximum number of results to retrieve. false integer (int32) Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult«Orchestrator.»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Create a new orchestrator. POST /rest/v1/orchestrators Parameters Type Name Description Required Schema Default BodyParameter orchestratorRequest Request for orchestrator creation true Request for creation of a new orchestrator. Responses HTTP Code Description Schema 201 Created RestResponse«string» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get an orchestrator from its id. 
GET /rest/v1/orchestrators/{id} Parameters Type Name Description Required Schema Default PathParameter id Id of the orchestrator to get true string Responses HTTP Code Description Schema 200 OK RestResponse«Orchestrator.» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update the name of an existing orchestrator. PUT /rest/v1/orchestrators/{id} Parameters Type Name Description Required Schema Default PathParameter id Id of the orchestrator to update. true string BodyParameter request Orchestrator update request, representing the fields to update and their new values. true Orchestrator update request. Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete an existing orchestrator. DELETE /rest/v1/orchestrators/{id} Parameters Type Name Description Required Schema Default PathParameter id Id of the orchestrator to delete. true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Get information on the artifacts that an orchestrator can support. GET /rest/v1/orchestrators/{id}/artifacts-support Parameters Type Name Description Required Schema Default PathParameter id Id of the orchestrator for which to get artifact support information true string Responses HTTP Code Description Schema 200 OK RestResponse«Array«string»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Disable an orchestrator. Destroys the instance of the orchestrator connector. 
DELETE /rest/v1/orchestrators/{id}/instance Parameters Type Name Description Required Schema Default PathParameter id Id of the orchestrator to disable true string QueryParameter force This parameter is useful only when trying to disable the orchestrator; if deployments are performed using this orchestrator, the disable operation will fail unless the force flag is true false boolean QueryParameter clearDeployments In case an orchestrator with deployment is forced to be disabled, the user may decide to mark all deployments managed by this orchestrator as ended. false boolean Responses HTTP Code Description Schema 200 OK RestResponse«List«Usage»» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Enable an orchestrator. Creates the instance of the orchestrator if not already created. POST /rest/v1/orchestrators/{id}/instance Parameters Type Name Description Required Schema Default PathParameter id Id of the orchestrator to enable true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get information on the locations that an orchestrator can support. GET /rest/v1/orchestrators/{id}/locationsupport Parameters Type Name Description Required Schema Default PathParameter id Id of the orchestrator for which to get location support information true string Responses HTTP Code Description Schema 200 OK RestResponse«LocationSupport» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Manages plugins.","baseurl":"","url":"/documentation/1.4.0/rest/controller_plugin-controller.html","date":null,"categories":[],"body":"Search for plugins registered in ALIEN. 
GET /rest/v1/plugins Parameters Type Name Description Required Schema Default QueryParameter query Query text. false string QueryParameter from Query from the given index. false integer (int32) QueryParameter size Maximum number of results to retrieve. false integer (int32) Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult«Plugin»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Upload a plugin archive. POST /rest/v1/plugins Description Content of the zip file must be compliant with the expected alien 4 cloud plugin structure. Parameters Type Name Description Required Schema Default FormDataParameter file Zip file that contains the plugin. true file Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes multipart/form-data Produces application/json Remove a plugin. DELETE /rest/v1/plugins/{pluginId} Description Remove a plugin (and unloads it if enabled). Note that if the plugin is used (deployment plugin for example) it won’t be disabled but will be marked as deprecated. In such situation an error code 350 is returned as part of the error and a list of plugin usages will be returned as part of the returned data. Role required [ ADMIN ] Parameters Type Name Description Required Schema Default PathParameter pluginId pluginId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«PluginUsage»» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Get a plugin configuration object. GET /rest/v1/plugins/{pluginId}/config Description Retrieve a plugin configuration object. 
Role required [ ADMIN ] Parameters Type Name Description Required Schema Default PathParameter pluginId pluginId true string Responses HTTP Code Description Schema 200 OK RestResponse«object» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Save a configuration object for a plugin. POST /rest/v1/plugins/{pluginId}/config Description Save a configuration object for a plugin. Returns the newly saved configuration. Role required [ ADMIN ] Parameters Type Name Description Required Schema Default PathParameter pluginId pluginId true string BodyParameter configObjectRequest configObjectRequest true object Responses HTTP Code Description Schema 200 OK RestResponse«object» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Disable a plugin. GET /rest/v1/plugins/{pluginId}/disable Description Disable a plugin (and unloads it if enabled). Note that if the plugin is used (deployment plugin for example) it won’t be disabled but will be marked as deprecated. In such situation an error code 350 is returned as part of the error and a list of plugin usages will be returned as part of the returned data. Role required [ ADMIN ] Parameters Type Name Description Required Schema Default PathParameter pluginId pluginId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«PluginUsage»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Enable a plugin. GET /rest/v1/plugins/{pluginId}/enable Description Enable and load a plugin. 
Role required [ ADMIN ] Parameters Type Name Description Required Schema Default PathParameter pluginId pluginId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Portability Insights Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_portability-insights-controller.html","date":null,"categories":[],"body":"Get all the portability definitions. GET /rest/v1/portability/definitions Responses HTTP Code Description Schema 200 OK RestResponse«object» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Quick Search Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_quick-search-controller.html","date":null,"categories":[],"body":"Search for applications or tosca elements in ALIEN’s repository. POST /rest/v1/quicksearch Parameters Type Name Description Required Schema Default BodyParameter requestObject requestObject true BasicSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search for relationship types in ALIEN’s repository. 
POST /rest/v1/quicksearch/relationship_types Parameters Type Name Description Required Schema Default BodyParameter requestObject requestObject true BasicSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Allow to create/list/delete a repository.","baseurl":"","url":"/documentation/1.4.0/rest/controller_repository-controller.html","date":null,"categories":[],"body":"Create a new repository. POST /rest/v1/repositories Parameters Type Name Description Required Schema Default BodyParameter createRequest Create repository true CreateRepositoryRequest Responses HTTP Code Description Schema 201 Created RestResponse«string» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search for repositories POST /rest/v1/repositories/search Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true FilteredSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update the repository. PUT /rest/v1/repositories/{id} Parameters Type Name Description Required Schema Default PathParameter id Id of the repository to update true string BodyParameter updateRequest Request for repository update true UpdateRepositoryRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete a repository. 
DELETE /rest/v1/repositories/{id} Parameters Type Name Description Required Schema Default PathParameter id Id of the repository to delete true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Allow to list all repository plugins (artifact resolver).","baseurl":"","url":"/documentation/1.4.0/rest/controller_repository-plugin-controller.html","date":null,"categories":[],"body":"Search for repository resolver plugins. GET /rest/v1/repository-plugins Responses HTTP Code Description Schema 200 OK RestResponse«List«RepositoryPluginComponent»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Runtime Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_runtime-controller.html","date":null,"categories":[],"body":"Get non-native node templates of a topology. GET /rest/v1/runtime/{applicationId}/environment/{applicationEnvironmentId}/nonNatives Description Returns a map of non-native {@link NodeTemplate}. Application role required [ APPLICATION_MANAGER DEPLOYMENT_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string PathParameter applicationEnvironmentId applicationEnvironmentId true string Responses HTTP Code Description Schema 200 OK RestResponse«Map«string,NodeTemplate»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get runtime (deployed) topology of an application on a specific cloud. GET /rest/v1/runtime/{applicationId}/environment/{applicationEnvironmentId}/topology Parameters Type Name Description Required Schema Default PathParameter applicationId Id of the application for which to get deployed topology. 
true string PathParameter applicationEnvironmentId Id of the environment for which to get deployed topology. true string Responses HTTP Code Description Schema 200 OK RestResponse«TopologyDTO» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Trigger a custom command on a specific node template of a topology. POST /rest/v1/runtime/{applicationId}/operations Description Returns a response with no errors and the command response as data in success case. Application role required [ APPLICATION_MANAGER ] Parameters Type Name Description Required Schema Default PathParameter applicationId applicationId true string BodyParameter operationRequest operationRequest true OperationExecRequest Responses HTTP Code Description Schema 200 OK DeferredResult«RestResponse«object»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Allow to create/read/update/delete and search services.","baseurl":"","url":"/documentation/1.4.0/rest/controller_service-resource-controller.html","date":null,"categories":[],"body":"List and iterate service resources. GET /rest/v1/services Description This is a simple API to list (with iteration) the service resources. If you need to search with criteria, please look at the advanced search API. Parameters Type Name Description Required Schema Default QueryParameter from Optional pagination start index. false integer (int32) 0 QueryParameter count Optional pagination element count (limited to 1000). false integer (int32) 100 Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult«Service.»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Create a new service. 
POST /rest/v1/services Parameters Type Name Description Required Schema Default BodyParameter createRequest Create service true Request for creation of a new service. Responses HTTP Code Description Schema 201 Created RestResponse«string» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Search services. POST /rest/v1/services/adv/search Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true SortedSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«GetMultipleDataResult«Service.»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Get a service from its id. GET /rest/v1/services/{id} Parameters Type Name Description Required Schema Default PathParameter id Id of the service to get true string Responses HTTP Code Description Schema 200 OK RestResponse«Service.» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Update a service. PUT /rest/v1/services/{id} Description Alien managed services (through application deployment) cannot be updated via API. Parameters Type Name Description Required Schema Default PathParameter id Id of the service to update. true string BodyParameter request ServiceResource update request, representing the fields to update and their new values. true UpdateServiceResourceRequest Responses HTTP Code Description Schema 200 OK RestResponse«ConstraintInformation» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Delete a service. DELETE /rest/v1/services/{id} Parameters Type Name Description Required Schema Default PathParameter id Id of the service to delete. 
true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces / Patch a service. PATCH /rest/v1/services/{id} Description When the service is managed by alien (through application deployment) the only authorized patches are on location and authorizations. Parameters Type Name Description Required Schema Default PathParameter id Id of the service to update. true string BodyParameter request ServiceResource patch request, representing the fields to update and their new values. true PatchServiceResourceRequest Responses HTTP Code Description Schema 200 OK RestResponse«ConstraintInformation» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces / "},{"title":"Allow to grant/revoke services authorizations","baseurl":"","url":"/documentation/1.4.0/rest/controller_service-security-controller.html","date":null,"categories":[],"body":"Revoke the application’s authorisation to access the service resource DELETE /rest/v1/services/{serviceId}/security/applications/{applicationId} Description Only user with ADMIN role can revoke access to the location. Parameters Type Name Description Required Schema Default PathParameter serviceId serviceId true string PathParameter applicationId applicationId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json List all applications/environments authorized to access the service resource GET /rest/v1/services/{serviceId}/security/environmentsPerApplication Description Only user with ADMIN role can list authorized applications/environments for the location. 
Parameters Type Name Description Required Schema Default PathParameter serviceId serviceId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«ApplicationEnvironmentAuthorizationDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update applications/environments authorized to access the service resource POST /rest/v1/services/{serviceId}/security/environmentsPerApplication Description Only user with ADMIN role can update authorized applications/environments for the location. Parameters Type Name Description Required Schema Default PathParameter serviceId serviceId true string BodyParameter request request true ApplicationEnvironmentAuthorizationUpdateRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json List all groups authorized to access the service resource GET /rest/v1/services/{serviceId}/security/groups Description Only user with ADMIN role can list authorized groups to the location. Parameters Type Name Description Required Schema Default PathParameter serviceId serviceId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«GroupDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Grant access to the service resource to the groups POST /rest/v1/services/{serviceId}/security/groups Description Only user with ADMIN role can grant access to a group. 
Parameters Type Name Description Required Schema Default PathParameter serviceId serviceId true string BodyParameter groupIds groupIds true string array Responses HTTP Code Description Schema 200 OK RestResponse«List«GroupDTO»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Revoke the group’s authorisation to access the service resource DELETE /rest/v1/services/{serviceId}/security/groups/{groupId} Description Only user with ADMIN role can revoke access to the location. Parameters Type Name Description Required Schema Default PathParameter serviceId serviceId true string PathParameter groupId groupId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«GroupDTO»» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json List all users authorized to access the service resource GET /rest/v1/services/{serviceId}/security/users Description Only user with ADMIN role can list authorized users to the location. Parameters Type Name Description Required Schema Default PathParameter serviceId serviceId true string Responses HTTP Code Description Schema 200 OK RestResponse«List«UserDTO»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Grant access to the service to the users, send back the new authorised users list POST /rest/v1/services/{serviceId}/security/users Description Only user with ADMIN role can grant access to other users. 
Parameters Type Name Description Required Schema Default PathParameter serviceId serviceId true string BodyParameter userNames userNames true string array Responses HTTP Code Description Schema 200 OK RestResponse«List«UserDTO»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Revoke the user’s authorisation to access the service resource, send back the new authorised users list DELETE /rest/v1/services/{serviceId}/security/users/{username} Description Only user with ADMIN role can revoke access to the location. Parameters Type Name Description Required Schema Default PathParameter serviceId serviceId true string PathParameter username username true string Responses HTTP Code Description Schema 200 OK RestResponse«List«UserDTO»» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Tag Configuration Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_tag-configuration-controller.html","date":null,"categories":[],"body":"Save tag configuration. POST /rest/v1/metaproperties Parameters Type Name Description Required Schema Default BodyParameter configuration configuration true MetaPropConfiguration Responses HTTP Code Description Schema 200 OK RestResponse«TagConfigurationSaveResponse» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search for tag configurations registered in ALIEN. 
POST /rest/v1/metaproperties/search Parameters Type Name Description Required Schema Default BodyParameter request request true FilteredSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get tag configuration. GET /rest/v1/metaproperties/{tagConfigurationId} Parameters Type Name Description Required Schema Default PathParameter tagConfigurationId tagConfigurationId true string Responses HTTP Code Description Schema 200 OK RestResponse«MetaPropConfiguration» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Remove tag configuration. DELETE /rest/v1/metaproperties/{tagConfigurationId} Parameters Type Name Description Required Schema Default PathParameter tagConfigurationId tagConfigurationId true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Topology Catalog Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_topology-catalog-controller.html","date":null,"categories":[],"body":"Search for topologies in the catalog. 
POST /rest/v1/catalog/topologies/search Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true FilteredSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult«Topology»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Create a topology and register it in the catalog POST /rest/v1/catalog/topologies/template Parameters Type Name Description Required Schema Default BodyParameter createTopologyRequest createTopologyRequest true CreateTopologyRequest Responses HTTP Code Description Schema 200 OK RestResponse«string» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get all the versions for a given archive (name) GET /rest/v1/catalog/topologies/{archiveName}/versions Parameters Type Name Description Required Schema Default PathParameter archiveName archiveName true string Responses HTTP Code Description Schema 200 OK RestResponse«Array«CatalogVersionResult»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / Get a specific topology from its id. GET /rest/v1/catalog/topologies/{id} Parameters Type Name Description Required Schema Default PathParameter id id true string Responses HTTP Code Description Schema 200 OK RestResponse«Topology» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces / "},{"title":"Topology Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_topology-controller.html","date":null,"categories":[],"body":"Retrieve a topology from its id. GET /rest/v1/topologies/{topologyId} Description Returns a topology with its details. 
Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] Parameters Type Name Description Required Schema Default PathParameter topologyId topologyId true string Responses HTTP Code Description Schema 200 OK RestResponse«TopologyDTO» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Check if a topology is valid or not. GET /rest/v1/topologies/{topologyId}/isvalid Description Returns true if valid, false if not. Application role required [ APPLICATION_MANAGER APPLICATION_DEVOPS ] Parameters Type Name Description Required Schema Default PathParameter topologyId topologyId true string QueryParameter environmentId environmentId false string Responses HTTP Code Description Schema 200 OK RestResponse«TopologyValidationResult» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Get matching options for a given topology.","baseurl":"","url":"/documentation/1.4.0/rest/controller_topology-location-matching-controller.html","date":null,"categories":[],"body":"Retrieve the list of locations on which the current user can deploy the topology. GET /rest/v1/topologies/{topologyId}/locations Parameters Type Name Description Required Schema Default PathParameter topologyId topologyId true string QueryParameter environmentId environmentId false string Responses HTTP Code Description Schema 200 OK RestResponse«List«ILocationMatch»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Topology Portability Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_topology-portability-controller.html","date":null,"categories":[],"body":"Evaluate the portability of a topology. 
GET /rest/v1/portability/topology/{topologyId} Parameters Type Name Description Required Schema Default PathParameter topologyId Id of the topology for which to evaluate portability. true string Responses HTTP Code Description Schema 200 OK RestResponse«TopologyPortabilityInsight» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"User Controller","baseurl":"","url":"/documentation/1.4.0/rest/controller_user-controller.html","date":null,"categories":[],"body":"Create a new user in ALIEN. POST /rest/v1/users Parameters Type Name Description Required Schema Default BodyParameter request request true CreateUserRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get multiple users from their usernames. POST /rest/v1/users/getUsers Description Returns a rest response that contains the list of requested users. Parameters Type Name Description Required Schema Default BodyParameter usernames usernames true string array Responses HTTP Code Description Schema 200 OK RestResponse«List«User»» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search for users registered in ALIEN. POST /rest/v1/users/search Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true UserSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Get a user based on its username. GET /rest/v1/users/{username} Description Returns a rest response that contains the user’s details. 
Parameters Type Name Description Required Schema Default PathParameter username username true string Responses HTTP Code Description Schema 200 OK RestResponse«User» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Update a user by merging the userUpdateRequest into the existing user PUT /rest/v1/users/{username} Parameters Type Name Description Required Schema Default PathParameter username username true string BodyParameter userUpdateRequest userUpdateRequest true UpdateUserRequest Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Delete an existing user from the internal users repository. DELETE /rest/v1/users/{username} Parameters Type Name Description Required Schema Default PathParameter username username true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json Add a role to a user. PUT /rest/v1/users/{username}/roles/{role} Parameters Type Name Description Required Schema Default PathParameter username username true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Remove a role from a user. 
DELETE /rest/v1/users/{username}/roles/{role} Parameters Type Name Description Required Schema Default PathParameter username username true string PathParameter role role true string Responses HTTP Code Description Schema 200 OK RestResponse«Void» 401 Unauthorized No Content 204 No Content No Content 403 Forbidden No Content Consumes application/json Produces application/json "},{"title":"Operations on workspaces","baseurl":"","url":"/documentation/1.4.0/rest/controller_workspace-controller.html","date":null,"categories":[],"body":"Get workspaces that the current user has the right to upload to GET /rest/v1/workspaces Responses HTTP Code Description Schema 200 OK RestResponse«List«Workspace»» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search for csars with workspaces information POST /rest/v1/workspaces/csars/search Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true FilteredSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Calculate the impact of the promotion GET /rest/v1/workspaces/promotion-impact Parameters Type Name Description Required Schema Default QueryParameter csarName csarName true string QueryParameter csarVersion csarVersion true string QueryParameter targetWorkspace targetWorkspace true string Responses HTTP Code Description Schema 200 OK RestResponse«CSARPromotionImpact» 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Perform or accept the promotion POST /rest/v1/workspaces/promotions Parameters Type Name Description Required Schema Default BodyParameter promotionRequest promotionRequest true PromotionRequest Responses HTTP Code 
Description Schema 200 OK RestResponse«PromotionRequest» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json Search for promotions POST /rest/v1/workspaces/promotions/search Parameters Type Name Description Required Schema Default BodyParameter searchRequest searchRequest true FilteredSearchRequest Responses HTTP Code Description Schema 200 OK RestResponse«FacetedSearchResult» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Topology catalog with workspace.","baseurl":"","url":"/documentation/1.4.0/rest/controller_workspace-topology-catalog-controller.html","date":null,"categories":[],"body":"Create a topology and register it in the catalog POST /rest/v1/workspaces/topologies/template Parameters Type Name Description Required Schema Default BodyParameter createTopologyRequest createTopologyRequest true CreateTopologyRequest Responses HTTP Code Description Schema 200 OK RestResponse«string» 201 Created No Content 401 Unauthorized No Content 403 Forbidden No Content 404 Not Found No Content Consumes application/json Produces application/json "},{"title":"Dashboards","baseurl":"","url":"/documentation/1.4.0/user_guide/dashboard_plugin.html","date":null,"categories":[],"body":" Premium feature The dashboard plugin is a premium feature. The dashboard plugin’s objective is to provide more visibility to users such as application managers and admins. The purpose is to provide an easy way to visualize the resources in use and manage them. This plugin adds several screens to Alien4Cloud and displays information about how many nodes are deployed for an application, for an orchestrator, etc. It also collects important information for billing purposes, such as the maximum number of nodes deployed at any given moment. How to use it? 
Load it Like every Alien4Cloud plugin, the dashboard plugin can be dragged & dropped in the “Admin > plugins” section. Configure it The dashboard plugin has a default configuration, but you can define the “instanceReports” refresh frequency. The instance reports task collects information about nodes that are deployed or undeployed to create a global view. This property is a cron expression, for example “0 0/5 * * * *” (every 5 minutes). You can define the frequency as you wish. Views Home view On the home view we can find (from left to right) : The maximum number of nodes ever reached and when it was reached A sunburst graph about the node hierarchy A timeline graph about the number of nodes deployed A sunburst graph about the number of compute nodes by app, by IaaS Application view A new tab is available in the “Application” view and gives access to a timeline graph about the number of resources deployed over a period. It displays one line per node type : alien.nodes.Compute, alien.nodes.Network and alien.nodes.BlockStorage. Orchestrator view A new tab is available in the “orchestrator” view and gives access to : - A timeline graph about the number of resources deployed over a period. It displays one line per node type : alien.nodes.Compute, alien.nodes.Network and alien.nodes.BlockStorage. - A bar chart of the resources currently running, by type : alien.nodes.Compute, alien.nodes.Network and alien.nodes.BlockStorage. "},{"title":"Migrate from 1.3.x to 1.4.x","baseurl":"","url":"/documentation/1.4.0/admin_guide/data_migration.html","date":null,"categories":[],"body":" Before anything else Before migrating data, please make sure to back up your data first. How to backup Alien4Cloud Warnings Compatibility This guide can only be used for migration from 1.3.x to 1.4.x . Plugins migration Please beware that if you have custom plugins imported in your alien4cloud instance, they will be discarded after migration. Therefore, you should re-upload them after the process is over. 
We do not guarantee the compatibility of those with the new Alien4cloud version. Orchestrators Orchestrators in alien4cloud are bound to orchestrator plugins. If you are using a custom orchestrator plugin, as stated above, it will be discarded after the migration. Migrate from 1.3.x to 1.4.0 Download Migration tool The migration tool takes the old data as input and transforms it to be compliant with the new alien4cloud version. Concerning either Alien4Cloud or elasticsearch data, no copy or transfer is made, meaning the existing data is really transformed and modified. Therefore, to be able to run the new version of the product with the migrated data, make sure the two instances of Alien4Cloud are configured to use the same and identical data path . In addition, they have to be bound to the same elasticsearch cluster , or, if running in an embedded mode, the elasticsearch configurations must be the same in terms of data paths . Alien4Cloud and ElasticSearch states We recommend stopping Alien4Cloud before performing the migration. ElasticSearch MUST be up and running . Alien4Cloud should be restarted once the process is completed. This is quite trivial to do when running in a classical production setup where the elasticsearch process is independent from Alien4Cloud ( See advanced configuration for more details ). However, if running in an embedded configuration, you can’t stop Alien4Cloud without stopping ElasticSearch. In that case, just make sure the platform is not used during the process. In order to migrate Alien4Cloud you must download the migration tool and copy it to the machine where Alien is running (or anywhere that has access to Alien’s data folders). 
After unzipping the archive, the tool can be configured by editing the files in path_to_unzipped_tool/config config.yml elasticsearch : # Name of your elasticsearch cluster cluster_name : alien4cloud # Addresses of elasticsearch cluster nodes addresses : 129.185.67.37:9300,129.185.67.26:9300 # The poller polls elasticsearch to export data for migration poller : # The poller's scroll lease and batch size see https://www.elastic.co/guide/en/elasticsearch/guide/1.x/scan-scroll.html scroll : lease : 120 batch_size : 100 # Where elasticsearch data will be exported and stored after transformation exporter.dir : /tmp/alien4cloud/migration/1.3/exported importer.dir : /tmp/alien4cloud/migration/1.3/toImport alien4cloud : # alien4cloud runtime directory. See \"directories.alien\" option in your alien4cloud config dir : /opt/alien4cloud/data # Uncomment me if you'd like to change cloudify url during the migration # new_cloudify_url: \"https://1.1.1.1\" Perform migration From the root directory of the unzipped tool, perform a migration dry run with the command: ./migration-tool.sh -migrate_dry_run The command above will run the migration without making any changes to your elasticsearch data. It’s a safe way to see if any errors or warnings happen during migration. If no WARN or ERROR message has been produced, you can proceed with the effective migration process. ./migration-tool.sh -migrate Start your new Alien4cloud, configured properly, after migration cd /opt/alien4cloud/alien4cloud-premium/ ./alien4cloud.sh Verify that all plugins (except custom ones) have been re-uploaded properly, otherwise re-upload them. Refresh your browser by emptying its cache so that the new plugins’ UI can be loaded. Normally, with this procedure, your Alien should be functional with the new version 1.4.0. 
Migrate from 1.4.x to 1.4.3.1 The premium dist versions of Alien4Cloud 1.4.3.1 are packaged with the plugin alien4cloud-migration-plugin to perform an automatic migration of the data contained in ES from 1.4.x to 1.4.3.1 at the first boot. Note : you need to disable services to perform the migration if you have used this feature before Alien 1.4.2. Standard migration procedure : stop the Alien4Cloud process. install the new log application on each Cloudify manager machine. More information. replace the old folder of Alien4Cloud Premium with the new version of Alien4Cloud Premium update the alien4Cloud-config.yaml . start Alien4Cloud on each orchestrator configuration, set the port of the new application log. More information. on each orchestrator configuration, add a new import for your location, plugins/overrides/plugin-included.yaml if online or plugins/overrides/plugin-managed.yaml if offline. More information. Migration of an Alien4Cloud HA In case of the migration of an Alien4Cloud HA : stop the backup computes then the master. install the new log application on each Cloudify manager machine. More information. copy your old Alien4Cloud folder. It’s necessary for your rollback and it will help you fill in the new Alien4Cloud configuration file. don’t touch the folder that is mounted to the shared runtime, just replace the old folder of Alien4Cloud Premium with the new version of Alien4Cloud Premium. update the alien4Cloud-config.yaml on each Alien4Cloud. start the Alien4Cloud master and wait for the end of the migration. on each orchestrator configuration, set the port of the new application log. More information. on each orchestrator configuration, add a new import for your location, plugins/overrides/plugin-included.yaml if online or plugins/overrides/plugin-managed.yaml if offline. More information. start the backup computes. 
Note: the Alien4Cloud migration plugin checks that the migration has already been done on the Alien4Cloud master, so it’s normal if you notice that the migration process is not launched on the backup computes. "},{"title":"Data type","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/data_type.html","date":null,"categories":[],"body":"Keynames Keyname Required Type Description tosca_definitions_version derived_from no string An optional parent Data Type name the Data Type derives from. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 version (1) no version An optional version for the Entity Type definition. N.A. metadata (2) no map of string Defines a section used to declare additional metadata information. alien_dsl_1_3_0 tosca_simple_yaml_1_0 tags (2) no map of string Defines a section used to declare additional metadata information. alien_dsl_1_3_0 alien_dsl_1_2_0 description no string An optional description for the Data Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 constraints (3) no list of constraint clauses The optional list of sequenced constraint clauses for the Data Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 properties (3) no map of property definitions The optional list of property definitions that comprise the schema for a complex Data Type in TOSCA. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 (1) version at type level is defined in TOSCA but it is optional and there is no example of how it should be managed. We believe in alien4cloud that versions should be managed at the service template/archive level and dispatched to every element defined in the service template/archive. (2) metadata appeared in TOSCA while alien4cloud already supported tags; support for the metadata keyword was added in version 1.3.1. Note that if you specify both metadata and tags, one may silently override the other (this should be avoided). (3) Constraints and properties are mutually exclusive. 
Constraints are used to extend primitive types by adding constraints to them while properties are used to define complex types. Grammar <data_type_name> : derived_from : <existing_type_name> version : <version_number> metadata : <map of string> description : <datatype_description> constraints : - <type_constraints> properties : <property_definitions> Example Define a new complex datatype mytypes.phonenumber : description : my phone number datatype properties : countrycode : type : integer areacode : type : integer number : type : integer Define a new datatype that derives from an existing type and extends it mytypes.phonenumber.extended : derived_from : mytypes.phonenumber description : custom phone number type that extends the basic phonenumber type properties : phone_description : type : string constraints : - max_length : 128 Extending a primitive type to add constraints mytypes.password : derived_from : string description : a password with min length constraints : - min_length : 8 "},{"title":"Data type","baseurl":"","url":"/documentation/1.4.0/devops_guide/normative_types/data_types.html","date":null,"categories":[],"body":"Credential The Credential type is a complex TOSCA data type used when describing authorization credentials used to access network accessible resources. Type URI : tosca.datatypes.Credential . Properties Name Required Type Description token yes string The required token used as a credential for authorization or access to a networked resource. user no string The optional user (name or ID) used for non-token based credentials. 
Example <some_tosca_entity> : properties : my_credential : type : Credential properties : user : myusername token : mypassword "},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_admin-api.html","date":null,"categories":[],"body":" "},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_admin-audit-api.html","date":null,"categories":[],"body":" FilteredSearchRequest Name Description Required Schema Default filters false object from false integer (int32) query false string size false integer (int32) RestResponse«Void» Name Description Required Schema Default error false RestError AuditConfigurationDTO Name Description Required Schema Default enabled false boolean methodsConfiguration false object Map«string,List«AuditedMethod»» RestError Name Description Required Schema Default code false integer (int32) message false string AuditedMethod Name Description Required Schema Default action false string category false string enabled false boolean method false string signature false string RestResponse«FacetedSearchResult» Name Description Required Schema Default data false FacetedSearchResult error false RestError RestResponse«AuditConfigurationDTO» Name Description Required Schema Default data false AuditConfigurationDTO error false RestError Map«string,Array«FacetedSearchFacet»» FacetedSearchResult Name Description Required Schema Default data false object array facets false object from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Map«string,Array«string»» "},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_admin-metaproperties-api.html","date":null,"categories":[],"body":" RestResponse«TagConfigurationSaveResponse» Name Description Required Schema Default data false TagConfigurationSaveResponse error false RestError PropertyValue Name Description Required Schema Default definition 
false boolean value false object PropertyDefinition Name Description Required Schema Default constraints false PropertyConstraint array default false PropertyValue definition false boolean description false string password false boolean required false boolean suggestionId false string type false string RestResponse«MetaPropConfiguration» Name Description Required Schema Default data false MetaPropConfiguration error false RestError Map«string,Array«string»» MetaPropConfiguration Name Description Required Schema Default constraints false PropertyConstraint array default false PropertyValue definition false boolean description false string entrySchema false PropertyDefinition id false string name false string password false boolean required false boolean suggestionId false string target false string type false string PropertyConstraint FilteredSearchRequest Name Description Required Schema Default filters false object from false integer (int32) query false string size false integer (int32) RestResponse«Void» Name Description Required Schema Default error false RestError TagConfigurationSaveResponse Name Description Required Schema Default id false string validationErrors false TagConfigurationValidationError array RestError Name Description Required Schema Default code false integer (int32) message false string RestResponse«FacetedSearchResult» Name Description Required Schema Default data false FacetedSearchResult error false RestError Map«string,Array«FacetedSearchFacet»» TagConfigurationValidationError Name Description Required Schema Default error false string path false string FacetedSearchResult Name Description Required Schema Default data false object array facets false object from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array 
"},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_admin-orchestrator-api.html","date":null,"categories":[],"body":" RestResponse«OrchestratorConfiguration» Name Description Required Schema Default data false OrchestratorConfiguration error false RestError UserDTO Name Description Required Schema Default email false string firstName false string lastName false string username false string GroupDTO Name Description Required Schema Default description false string email false string id false string name false string RestResponse«List«Usage»» Name Description Required Schema Default data false Usage array error false RestError RequirementDefinition Name Description Required Schema Default capabilityName false string description false string id false string lowerBound false integer (int32) nodeFilter false NodeFilter nodeType false string relationshipType false string type false string upperBound false integer (int32) GetMultipleDataResult«UserDTO» Name Description Required Schema Default data false UserDTO array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Orchestrator update request. A request object to pass when updating an orchestrator. Contains updatable fields. 
Name Description Required Schema Default deploymentNamePattern false string name false string Map«string,DataType» FilterDefinition Name Description Required Schema Default properties false object RelationshipTemplate Name Description Required Schema Default artifacts false object attributes false object interfaces false object name false string properties false object requirementName false string requirementType false string target false string targetedCapabilityName false string type false string RestResponse«Void» Name Description Required Schema Default error false RestError Version Name Description Required Schema Default buildNumber false integer (int32) incrementalVersion false integer (int32) majorVersion false integer (int32) minorVersion false integer (int32) qualifier false string Request to update a location resource. Name Description Required Schema Default enabled Flag to know if the resource is available to be used for configuration or matching. false boolean name New name of the resource. false string LocationSupport Name Description Required Schema Default multipleLocations false boolean types false string array CapabilityDefinition Name Description Required Schema Default description false string id false string properties false object type false string upperBound false integer (int32) validSources false string array Request for creation of a new orchestrator. Name Description Required Schema Default name Name of the orchestrator (must be unique as this allows users to identify it). true string pluginBean Id of the element of the plugin to use to manage communication with the orchestrator (plugins may have multiple components). true string pluginId Id of the plugin to use to manage communication with the orchestrator. 
true string DeploymentArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactName false string artifactRef false string artifactRepository false string artifactType false string deployPath false string description false string repositoryCredential false object repositoryName false string repositoryURL false string LocationDTO Name Description Required Schema Default location false Location resources false Contains the types and templates of elements configured for a given location. Map«string,FilterDefinition» UpdateLocationRequest Name Description Required Schema Default environmentType false string name false string Contains a custom resource template with its location’s updated dependencies. Name Description Required Schema Default newDependencies The location’s dependencies, which might have been updated when creating the resource template. false CSARDependency array resourceTemplate A custom configured resource template. false LocationResourceTemplate RestResponse«GetMultipleDataResult«Orchestrator.»» Name Description Required Schema Default data false GetMultipleDataResult«Orchestrator.» error false RestError RestResponse«Contains a custom resource template with its location’s updated dependencies.» Name Description Required Schema Default data false Contains a custom resource template with its location’s updated dependencies. error false RestError Capability Name Description Required Schema Default properties false object type false string Contains the types and templates of elements configured for a given location. Name Description Required Schema Default allNodeTypes Map that contains all node types. false object capabilityTypes Map that contains the capability types used by the configuration types or node types. false object configurationTemplates List of configuration templates already configured for the location. Usually abstract types. 
false LocationResourceTemplate array configurationTypes Map of node types id, node type used to configure a given location. false object dataTypes Map of data types id, data type used to configure the templates of on-demand resources in a location. false object nodeTemplates List of node templates already configured for the location. false LocationResourceTemplate array nodeTypes Map of node types id, node type used to configure the templates of on-demand resources in a location. false object onDemandTypes Map that contains the on demand types. false object providedTypes List of recommended node type IDs, e.g. defined at the orchestrator level false string array Orchestrator. An orchestrator in alien 4 cloud is a software engine that alien 4 cloud connects to in order to orchestrate a topology deployment. An orchestrator may manage one or multiple locations. Name Description Required Schema Default deploymentNamePattern false string id false string name false string pluginBean false string pluginId false string state false enum (DISABLED, CONNECTING, CONNECTED, DISCONNECTED) RestResponse«boolean» Name Description Required Schema Default data false boolean error false RestError Map«string,Capability» Request to update a location resource template property. Name Description Required Schema Default propertyName Name of the property to update. false string propertyValue Value of the property to update, the type must be equal to the type of the property that will be updated. 
false object PropertyValue Name Description Required Schema Default definition false boolean value false object Map«string,Interface» Map«string,RelationshipTemplate» Map«string,PropertyDefinition» Map«string,List«PropertyConstraint»» RestResponse«GetMultipleDataResult«GroupDTO»» Name Description Required Schema Default data false GetMultipleDataResult«GroupDTO» error false RestError RestResponse«Array«string»» Name Description Required Schema Default data false string array error false RestError ImplementationArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactRef false string artifactRepository false string artifactType false string repositoryCredential false object repositoryName false string repositoryURL false string Requirement Name Description Required Schema Default properties false object type false string NodeTemplate Name Description Required Schema Default artifacts false object attributes false object capabilities false object groups false string array interfaces false object name false string portability false object properties false object relationships false object requirements false object type false string RestResponse«ConstraintInformation» Name Description Required Schema Default data false ConstraintInformation error false RestError NodeType Name Description Required Schema Default abstract false boolean alienScore false integer (int64) archiveName false string archiveVersion false string artifacts false object attributes false object capabilities false CapabilityDefinition array creationDate false string (date-time) defaultCapabilities false string array derivedFrom false string array description false string elementId false string id false string interfaces false object lastUpdateDate false string (date-time) nestedVersion false Version portability false object properties false object requirements false RequirementDefinition array substitutionTopologyId false string tags false Tag 
array workspace false string RestResponse«GetMultipleDataResult«ApplicationEnvironmentAuthorizationDTO»» Name Description Required Schema Default data false GetMultipleDataResult«ApplicationEnvironmentAuthorizationDTO» error false RestError Request for creation of a new location. Name Description Required Schema Default infrastructureType Type of the infrastructure of the new location. true string name Name of the location (must be unique for this orchestrator as this allows users to identify it). true string Application Name Description Required Schema Default creationDate false string (date-time) description false string groupRoles false object id false string imageId false string lastUpdateDate false string (date-time) metaProperties false object name false string tags false Tag array userRoles false object RestResponse«string» Name Description Required Schema Default data false string error false RestError Map«string,NodeType» Request for creation of a new location’s resource. Name Description Required Schema Default archiveName Archive name of the resource type. false string archiveVersion Archive version of the resource type. false string resourceName Name of the location’s resource. true string resourceType Type of the location’s resource. 
true string LocationResourceTemplate Name Description Required Schema Default applicationPermissions false object enabled false boolean environmentPermissions false object generated false boolean groupPermissions false object id false string locationId false string name false string portabilityDefinitions false object service false boolean template false NodeTemplate types false string array userPermissions false object RestResponse«List«GroupDTO»» Name Description Required Schema Default data false GroupDTO array error false RestError RestResponse«LocationDTO» Name Description Required Schema Default data false LocationDTO error false RestError Map«string,Operation» Operation Name Description Required Schema Default dependencies false DeploymentArtifact array description false string implementationArtifact false ImplementationArtifact inputParameters false object portability false object RestResponse«List«UserDTO»» Name Description Required Schema Default data false UserDTO array error false RestError SubjectsAuthorizationRequest Name Description Required Schema Default create false string array delete false string array resources false string array RestResponse«GetMultipleDataResult«UserDTO»» Name Description Required Schema Default data false GetMultipleDataResult«UserDTO» error false RestError PropertyConstraint IValue Name Description Required Schema Default definition false boolean RestError Name Description Required Schema Default code false integer (int32) message false string Map«string,Set«string»» ApplicationEnvironmentAuthorizationUpdateRequest Name Description Required Schema Default applicationsToAdd false string array applicationsToDelete false string array environmentsToAdd false string array environmentsToDelete false string array resources false string array DataType Name Description Required Schema Default abstract false boolean archiveName false string archiveVersion false string creationDate false string (date-time) deriveFromSimpleType false 
boolean derivedFrom false string array description false string elementId false string id false string lastUpdateDate false string (date-time) nestedVersion false Version properties false object tags false Tag array workspace false string RestResponse«List«LocationDTO»» Name Description Required Schema Default data false LocationDTO array error false RestError RestResponse«List«ApplicationEnvironmentAuthorizationDTO»» Name Description Required Schema Default data false ApplicationEnvironmentAuthorizationDTO array error false RestError Map«string,IValue» CapabilityType Name Description Required Schema Default abstract false boolean archiveName false string archiveVersion false string attributes false object creationDate false string (date-time) derivedFrom false string array description false string elementId false string id false string lastUpdateDate false string (date-time) nestedVersion false Version properties false object tags false Tag array validSources false string array workspace false string Usage Name Description Required Schema Default resourceId false string resourceName false string resourceType false string workspace false string GetMultipleDataResult«Orchestrator.» Name Description Required Schema Default data false Orchestrator. array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Map«string,Requirement» RestResponse«Orchestrator.» Name Description Required Schema Default data false Orchestrator. 
error false RestError ConstraintInformation Name Description Required Schema Default name false string path false string reference false object type false string value false string RestResponse«LocationSupport» Name Description Required Schema Default data false LocationSupport error false RestError RestResponse«List«LocationResourceTemplate»» Name Description Required Schema Default data false LocationResourceTemplate array error false RestError PropertyDefinition Name Description Required Schema Default constraints false PropertyConstraint array default false PropertyValue definition false boolean description false string password false boolean required false boolean suggestionId false string type false string Map«string,AbstractPropertyValue» ApplicationEnvironment Name Description Required Schema Default applicationId false string description false string environmentType false enum (OTHER, DEVELOPMENT, INTEGRATION_TESTS, USER_ACCEPTANCE_TESTS, PRE_PRODUCTION, PRODUCTION) groupRoles false object id false string name false string topologyVersion false string userRoles false object version false string OrchestratorConfiguration Name Description Required Schema Default configuration false object id false string GetMultipleDataResult«GroupDTO» Name Description Required Schema Default data false GroupDTO array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array CSARDependency Name Description Required Schema Default hash false string name false string version false string Map«string,CapabilityType» Map«string,string» Map«string,DeploymentArtifact» ApplicationEnvironmentAuthorizationDTO Name Description Required Schema Default application false Application environments false ApplicationEnvironment array Request to update or check the value of a property. Name Description Required Schema Default definitionId Id of the property to set. 
true string value Value to set for the property. true string GetMultipleDataResult«ApplicationEnvironmentAuthorizationDTO» Name Description Required Schema Default data false ApplicationEnvironmentAuthorizationDTO array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Tag Name Description Required Schema Default name false string value false string AbstractPropertyValue Name Description Required Schema Default definition false boolean Interface Name Description Required Schema Default description false string operations false object type false string Location A location represents a cloud, a region of a cloud, a set of machines and resources: basically any location on which Alien4Cloud will be allowed to perform deployments. Locations are managed by orchestrators. Name Description Required Schema Default applicationPermissions false object creationDate false string (date-time) dependencies false CSARDependency array environmentPermissions false object environmentType false string groupPermissions false object id false string infrastructureType false string lastUpdateDate false string (date-time) metaProperties false object name false string orchestratorId false string userPermissions false object NodeFilter Name Description Required Schema Default capabilities false object properties false object "},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_admin-plugin-api.html","date":null,"categories":[],"body":" PluginDescriptor Name Description Required Schema Default componentDescriptors false Describe a component of a plugin (can be an IOrchestrator etc.).
array configurationClass false string dependencies false string array description false string id false string name false string uiEntryPoint false string version false string RestResponse«object» Name Description Required Schema Default data false object error false RestError RestResponse«Void» Name Description Required Schema Default error false RestError PluginUsage Name Description Required Schema Default resourceId false string resourceName false string resourceType false string Describe a component of a plugin (can be an IOrchestrator etc.). Name Description Required Schema Default beanName Name of the component bean in the plugin spring context. false string description Description of the plugin. false string name Name of the plugin component. false string type Type of the plugin. false string RestError Name Description Required Schema Default code false integer (int32) message false string GetMultipleDataResult«Plugin» Name Description Required Schema Default data false Plugin array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array RestResponse«List«PluginUsage»» Name Description Required Schema Default data false PluginUsage array error false RestError RestResponse«GetMultipleDataResult«Plugin»» Name Description Required Schema Default data false GetMultipleDataResult«Plugin» error false RestError Plugin Name Description Required Schema Default configurable false boolean descriptor false PluginDescriptor enabled false boolean esId false string id false string pluginPathId false string "},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_admin-user-api.html","date":null,"categories":[],"body":" Group Name Description Required Schema Default description false string email false string id false string name false string roles false string array users false string array User Name Description Required Schema Default accountNonExpired 
false boolean accountNonLocked false boolean credentialsNonExpired false boolean email false string enabled false boolean firstName false string groupRoles false string array groups false string array internalDirectory false boolean lastName false string password false string roles false string array username false string RestResponse«string» Name Description Required Schema Default data false string error false RestError RestResponse«List«User»» Name Description Required Schema Default data false User array error false RestError RestResponse«List«Group»» Name Description Required Schema Default data false Group array error false RestError RestResponse«User» Name Description Required Schema Default data false User error false RestError RestResponse«Group» Name Description Required Schema Default data false Group error false RestError Map«string,Array«string»» UpdateGroupRequest Name Description Required Schema Default description false string email false string name false string roles false string array users false string array FilteredSearchRequest Name Description Required Schema Default filters false object from false integer (int32) query false string size false integer (int32) RestResponse«Void» Name Description Required Schema Default error false RestError UserSearchRequest Name Description Required Schema Default from false integer (int32) group false string query false string size false integer (int32) RestError Name Description Required Schema Default code false integer (int32) message false string CreateGroupRequest Name Description Required Schema Default description false string email false string name false string roles false string array users false string array RestResponse«FacetedSearchResult» Name Description Required Schema Default data false FacetedSearchResult error false RestError Map«string,Array«FacetedSearchFacet»» RestResponse«GetMultipleDataResult» Name Description Required Schema Default data false GetMultipleDataResult error false 
RestError CreateUserRequest Name Description Required Schema Default email false string firstName false string lastName false string password false string roles false string array username false string GetMultipleDataResult Name Description Required Schema Default data false object array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array UpdateUserRequest Name Description Required Schema Default email false string firstName false string lastName false string password false string roles false string array FacetedSearchResult Name Description Required Schema Default data false object array facets false object from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array "},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_applications-api.html","date":null,"categories":[],"body":" GetMultipleDataResult«ApplicationVersion» Name Description Required Schema Default data false ApplicationVersion array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Map«string,AbstractStep» UpdateApplicationVersionRequest Name Description Required Schema Default description false string version false string RestResponse«Map«string,Array«ApplicationEnvironmentDTO»»» Name Description Required Schema Default data false object error false RestError RestResponse«DeploymentTopologyDTO» Name Description Required Schema Default data false DeploymentTopologyDTO error false RestError DeploymentTopologyDTO Name Description Required Schema Default availableSubstitutions false Contains the types and templates of resources that can be substituted for a deployment. 
capabilityTypes false object dataTypes false object locationPolicies false object locationResourceTemplates false object nodeTypes false object relationshipTypes false object topology false DeploymentTopology validation false TopologyValidationResult Contains the types and templates of resources that can be substituted for a deployment. Name Description Required Schema Default availableSubstitutions Map of node id to list of available location resource templates’ ids. false object substitutionTypes Location resource types contain types for the templates. false LocationResourceTypes substitutionsTemplates Map of location resource id to location resource template. false object Map«string,Array«string»» Map«string,DataType» PaaSDeploymentLog Name Description Required Schema Default content false string deploymentId false string deploymentPaaSId false string executionId false string id false string instanceId false string interfaceName false string level false enum (debug, info, warn, error) nodeId false string operationName false string timestamp false string (date-time) type false string workflowId false string FilterDefinition Name Description Required Schema Default properties false object FilteredSearchRequest Name Description Required Schema Default filters false object from false integer (int32) query false string size false integer (int32) RestResponse«Void» Name Description Required Schema Default error false RestError Map«string,Workflow» RestResponse«FacetedSearchResult«PaaSDeploymentLog»» Name Description Required Schema Default data false FacetedSearchResult«PaaSDeploymentLog» error false RestError Map«string,object» PropertyValue«Topology» Name Description Required Schema Default definition false boolean value false Topology CreateApplicationTopologyVersionRequest Request to set locations policies for a deployment.
Name Description Required Schema Default applicationTopologyVersion Id of the application topology version to use to initialize this application topology version. false string description Description for this specific variant of the topology for the application version. false string qualifier Qualifier string that allows having a distinct topology version for every Application Topology Version in an Application Version. false string topologyTemplateId Id of the topology template to use to initialize the application topology version that will be created with the new application version. false string Map«string,Map«string,InstanceInformation»» Capability Name Description Required Schema Default properties false object type false string FacetedSearchResult«PaaSDeploymentLog» Name Description Required Schema Default data false PaaSDeploymentLog array facets false object from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array PropertyValue Name Description Required Schema Default definition false boolean value false object Workflow Name Description Required Schema Default description false string errors false AbstractWorkflowError array hosts false string array name false string standard false boolean steps false object Map«string,Interface» Map«string,RelationshipTemplate» CreateApplicationVersionRequest Name Description Required Schema Default description false string fromVersionId Id of the application version to use to initialize all application topology versions. false string topologyTemplateId Id of the topology template to use to initialize the application topology version that will be created with the new application version.
false string version true string Map«string,PropertyDefinition» UpdateDeploymentTopologyRequest Name Description Required Schema Default inputProperties false object providerDeploymentProperties false object ImplementationArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactRef false string artifactRepository false string artifactType false string repositoryCredential false object repositoryName false string repositoryURL false string Requirement Name Description Required Schema Default properties false object type false string RestResponse«object» Name Description Required Schema Default data false object error false RestError TopologyDTO Name Description Required Schema Default archiveContentTree false TreeNode capabilityTypes false object dataTypes false object delegateType false string dependencyConflicts false DependencyConflictDTO array lastOperationIndex false integer (int32) nodeTypes false object operations false AbstractEditorOperation array relationshipTypes false object topology false Topology RestResponse«ConstraintInformation» Name Description Required Schema Default data false ConstraintInformation error false RestError Service. A service is something running somewhere, exposing capabilities and requirements, matchable in a topology in place of an abstract component. 
Name Description Required Schema Default applicationPermissions false object capabilitiesRelationshipTypes false object creationDate false string (date-time) dependency false CSARDependency deploymentId false string description false string environmentId false string environmentPermissions false object groupPermissions false object id false string lastUpdateDate false string (date-time) locationIds false string array name false string nestedVersion false Version nodeInstance false NodeInstance requirementsRelationshipTypes false object userPermissions false object version false string Application Name Description Required Schema Default creationDate false string (date-time) description false string groupRoles false object id false string imageId false string lastUpdateDate false string (date-time) metaProperties false object name false string tags false Tag array userRoles false object NodeInstance Name Description Required Schema Default attributeValues false object nodeTemplate false NodeTemplate typeVersion false string RestResponse«Application» Name Description Required Schema Default data false Application error false RestError DeferredResult«RestResponse«Map«string,Map«string,InstanceInformation»»»» Name Description Required Schema Default result false object setOrExpired false boolean RestResponse«string» Name Description Required Schema Default data false string error false RestError Map«string,NodeType» AbstractTask Name Description Required Schema Default code false enum (IMPLEMENT, IMPLEMENT_RELATIONSHIP, REPLACE, SATISFY_LOWER_BOUND, PROPERTIES, HA_INVALID, SCALABLE_CAPABILITY_INVALID, NODE_FILTER_INVALID, WORKFLOW_INVALID, INPUT_ARTIFACT_INVALID, ARTIFACT_INVALID, LOCATION_POLICY, LOCATION_UNAUTHORIZED, LOCATION_DISABLED, ORCHESTRATOR_PROPERTY, INPUT_PROPERTY, NODE_NOT_SUBSTITUTED, FORBIDDEN_OPERATION) EnvironmentStatusDTO Name Description Required Schema Default environmentName false string environmentStatus false enum (DEPLOYED, UNDEPLOYED, 
INIT_DEPLOYMENT, DEPLOYMENT_IN_PROGRESS, UPDATE_IN_PROGRESS, UPDATED, UNDEPLOYMENT_IN_PROGRESS, WARNING, FAILURE, UPDATE_FAILURE, UNKNOWN) Map«string,InstanceInformation» Map«string,Operation» SubstitutionMapping Name Description Required Schema Default capabilities false object requirements false object substitutionType false string LocationResourceTypes Name Description Required Schema Default allNodeTypes Map that contains all node types. false object capabilityTypes Map that contains the capability types used by the configuration types or node types. false object configurationTypes Map of node type id to node type, used to configure a given location. false object dataTypes Map of data type id to data type, used to configure the templates of on-demand resources in a location. false object nodeTypes Map of node type id to node type, used to configure the templates of on-demand resources in a location. false object onDemandTypes Map that contains the on demand types. false object providedTypes List of recommended node type ids, e.g. defined at the orchestrator level false string array Map«string,LocationResourceTemplate» RestError Name Description Required Schema Default code false integer (int32) message false string RestResponse«TopologyDTO» Name Description Required Schema Default data false TopologyDTO error false RestError Request for creation of a new service. Name Description Required Schema Default fromRuntime Create the service from the deployed topology of this environment? Throws an error if the environment is not deployed. Defaults to false true boolean serviceName Name of the new service (must be unique for a given version).
true string Map«string,EnvironmentStatusDTO» DataType Name Description Required Schema Default abstract false boolean archiveName false string archiveVersion false string creationDate false string (date-time) deriveFromSimpleType false boolean derivedFrom false string array description false string elementId false string id false string lastUpdateDate false string (date-time) nestedVersion false Version properties false object tags false Tag array workspace false string JsonRawRestResponse Name Description Required Schema Default data false string error false RestError RestResponse«Map«string,Map«string,InstanceInformation»»» Name Description Required Schema Default data false object error false RestError Map«string,IValue» Map«string,Requirement» UpdatePropertyRequest Name Description Required Schema Default propertyName false string propertyValue false object ConstraintInformation Name Description Required Schema Default name false string path false string reference false object type false string value false string SubstitutionTarget Name Description Required Schema Default nodeTemplateName false string serviceRelationshipType false string targetId false string PropertyDefinition Name Description Required Schema Default constraints false PropertyConstraint array default false PropertyValue«DeploymentTopology» definition false boolean description false string entrySchema false PropertyDefinition password false boolean required false boolean suggestionId false string type false string CreateApplicationRequest Name Description Required Schema Default archiveName false string description false string name false string topologyTemplateVersionId false string RestResponse«ApplicationEnvironmentDTO» Name Description Required Schema Default data false ApplicationEnvironmentDTO error false RestError Request to update or check the value of a property. Name Description Required Schema Default definitionId Id of the property to set. 
true string value Value to set for the property. true string Tag Name Description Required Schema Default name false string value false string AbstractPropertyValue Name Description Required Schema Default definition false boolean Interface Name Description Required Schema Default description false string operations false object type false string NodeFilter Name Description Required Schema Default capabilities false object properties false object SortConfiguration Name Description Required Schema Default ascending false boolean sortBy false string ApplicationVersion Name Description Required Schema Default applicationId false string description false string id false string nestedVersion false Version released false boolean topologyVersions false object version false string RelationshipType Name Description Required Schema Default abstract false boolean archiveName false string archiveVersion false string artifacts false object attributes false object creationDate false string (date-time) derivedFrom false string array description false string elementId false string id false string interfaces false object lastUpdateDate false string (date-time) nestedVersion false Version properties false object tags false Tag array validSources false string array validTargets false string array workspace false string Deployment Name Description Required Schema Default endDate false string (date-time) environmentId false string id false string locationIds false string array orchestratorDeploymentId false string orchestratorId false string serviceResourceIds false string array sourceId false string sourceName false string sourceType false enum (APPLICATION, CSAR) startDate false string (date-time) versionId false string workflowExecutions false object RequirementDefinition Name Description Required Schema Default capabilityName false string description false string id false string lowerBound false integer (int32) nodeFilter false NodeFilter nodeType false string relationshipType 
false string type false string upperBound false integer (int32) AbstractStep Name Description Required Schema Default followingSteps false string array name false string precedingSteps false string array ApplicationTopologyVersion Name Description Required Schema Default archiveId false string description false string qualifier false string RelationshipTemplate Name Description Required Schema Default artifacts false object attributes false object interfaces false object name false string properties false object requirementName false string requirementType false string target false string targetedCapabilityName false string type false string AbstractWorkflowError Version Name Description Required Schema Default buildNumber false integer (int32) incrementalVersion false integer (int32) majorVersion false integer (int32) minorVersion false integer (int32) qualifier false string PropertyValue«DeploymentTopology» Name Description Required Schema Default definition false boolean value false DeploymentTopology RestResponse«Map«string,Map«string,EnvironmentStatusDTO»»» Name Description Required Schema Default data false object error false RestError TreeNode Name Description Required Schema Default artifactId false string children false TreeNode array fullPath false string leaf false boolean name false string DependencyConflictDTO Name Description Required Schema Default dependency false string resolvedVersion false string source false string DeployApplicationRequest Name Description Required Schema Default applicationEnvironmentId false string applicationId false string Map«string,Array«FacetedSearchFacet»» CapabilityDefinition Name Description Required Schema Default description false string id false string properties false object type false string upperBound false integer (int32) validSources false string array UpdateApplicationEnvironmentRequest Name Description Required Schema Default currentVersionId false string description false string environmentType false enum 
(OTHER, DEVELOPMENT, INTEGRATION_TESTS, USER_ACCEPTANCE_TESTS, PRE_PRODUCTION, PRODUCTION) name false string DeploymentArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactName false string artifactRef false string artifactRepository false string artifactType false string deployPath false string description false string repositoryCredential false object repositoryName false string repositoryURL false string Map«string,FilterDefinition» NodeGroup Name Description Required Schema Default index false integer (int32) members false string array name false string policies false AbstractPolicy array Map«string,Map«string,EnvironmentStatusDTO»» RestResponse«GetMultipleDataResult«ApplicationEnvironmentDTO»» Name Description Required Schema Default data false GetMultipleDataResult«ApplicationEnvironmentDTO» error false RestError Map«string,ApplicationTopologyVersion» DeploymentTopology Name Description Required Schema Default archiveName false string archiveVersion false string creationDate false string (date-time) dependencies false CSARDependency array deployed false boolean description false string empty false boolean environmentId false string groups false object id false string initialTopologyId false string inputArtifacts false object inputProperties false object inputs false object lastDeploymentTopologyUpdateDate false string (date-time) lastUpdateDate false string (date-time) locationDependencies false CSARDependency array locationGroups false object nestedVersion false Version nodeTemplates false object orchestratorId false string originalNodes false object outputAttributes false object outputCapabilityProperties false object outputProperties false object providerDeploymentProperties false object substitutedNodes false object substitutionMapping false SubstitutionMapping uploadedInputArtifacts false object versionId false string workflows false object workspace false string RestResponse«boolean» Name 
Description Required Schema Default data false boolean error false RestError Map«string,Capability» InstanceInformation Name Description Required Schema Default attributes false object instanceStatus false enum (SUCCESS, PROCESSING, FAILURE, MAINTENANCE) runtimeProperties false object state false string Topology Name Description Required Schema Default archiveName false string archiveVersion false string creationDate false string (date-time) dependencies false CSARDependency array description false string empty false boolean groups false object id false string inputArtifacts false object inputs false object lastUpdateDate false string (date-time) nestedVersion false Version nodeTemplates false object outputAttributes false object outputCapabilityProperties false object outputProperties false object substitutionMapping false SubstitutionMapping workflows false object workspace false string RestResponse«Service.» Name Description Required Schema Default data false Service. error false RestError Map«string,List«PropertyConstraint»» TopologyValidationResult Name Description Required Schema Default taskList false AbstractTask array valid false boolean warningList false AbstractTask array Map«string,Map«string,Set«string»»» NodeTemplate Name Description Required Schema Default artifacts false object attributes false object capabilities false object groups false string array interfaces false object name false string portability false object properties false object relationships false object requirements false object type false string UpdateApplicationRequest Name Description Required Schema Default description false string name false string Map«string,PropertyValue» NodeType Name Description Required Schema Default abstract false boolean alienScore false integer (int64) archiveName false string archiveVersion false string artifacts false object attributes false object capabilities false CapabilityDefinition array creationDate false string (date-time) defaultCapabilities 
false string array derivedFrom false string array description false string elementId false string id false string interfaces false object lastUpdateDate false string (date-time) nestedVersion false Version portability false object properties false object requirements false RequirementDefinition array substitutionTopologyId false string tags false Tag array workspace false string UpdateTopologyVersionForEnvironmentRequest Name Description Required Schema Default environmentToCopyInput false string newTopologyVersion false string Map«string,NodeGroup» LocationResourceTemplate Name Description Required Schema Default applicationPermissions false object enabled false boolean environmentPermissions false object generated false boolean groupPermissions false object id false string locationId false string name false string portabilityDefinitions false object service false boolean template false NodeTemplate types false string array userPermissions false object RestResponse«Deployment» Name Description Required Schema Default data false Deployment error false RestError GetInputCandidatesRequest Name Description Required Schema Default applicationEnvironmentId false string applicationTopologyVersion false string UpdateTagRequest Name Description Required Schema Default tagKey false string tagValue false string Operation Name Description Required Schema Default dependencies false DeploymentArtifact array description false string implementationArtifact false ImplementationArtifact inputParameters false object portability false object RestResponse«ApplicationVersion» Name Description Required Schema Default data false ApplicationVersion error false RestError Map«string,SubstitutionTarget» PropertyConstraint IValue Name Description Required Schema Default definition false boolean ApplicationEnvironmentDTO Name Description Required Schema Default applicationId false string currentVersionName false string deployedVersion false string description false string environmentType false 
enum (OTHER, DEVELOPMENT, INTEGRATION_TESTS, USER_ACCEPTANCE_TESTS, PRE_PRODUCTION, PRODUCTION) groupRoles false object id false string name false string status false enum (DEPLOYED, UNDEPLOYED, INIT_DEPLOYMENT, DEPLOYMENT_IN_PROGRESS, UPDATE_IN_PROGRESS, UPDATED, UNDEPLOYMENT_IN_PROGRESS, WARNING, FAILURE, UPDATE_FAILURE, UNKNOWN) userRoles false object Map«string,Set«string»» AbstractPolicy Name Description Required Schema Default name false string type false string FacetedSearchResult Name Description Required Schema Default data false object array facets false object from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array ApplicationEnvironmentRequest Name Description Required Schema Default description false string environmentType false enum (OTHER, DEVELOPMENT, INTEGRATION_TESTS, USER_ACCEPTANCE_TESTS, PRE_PRODUCTION, PRODUCTION) inputCandidate false string name false string versionId false string RestResponse«List«ApplicationEnvironment»» Name Description Required Schema Default data false ApplicationEnvironment array error false RestError CapabilityType Name Description Required Schema Default abstract false boolean archiveName false string archiveVersion false string attributes false object creationDate false string (date-time) derivedFrom false string array description false string elementId false string id false string lastUpdateDate false string (date-time) nestedVersion false Version properties false object tags false Tag array validSources false string array workspace false string RestResponse«GetMultipleDataResult«ApplicationVersion»» Name Description Required Schema Default data false GetMultipleDataResult«ApplicationVersion» error false RestError DeferredResult«RestResponse«Void»» Name Description Required Schema Default result false object setOrExpired false boolean SearchLogRequest Name Description Required Schema Default filters false object from false 
integer (int32) fromDate false string (date-time) query false string size false integer (int32) sortConfiguration false SortConfiguration toDate false string (date-time) AbstractEditorOperation Name Description Required Schema Default author false string id false string previousOperationId false string SetLocationPoliciesRequest Request to set locations policies for a deployment. Name Description Required Schema Default groupsToLocations Locations settings for groups. key = groupName, value = locationId. Note that for now, the only valid group name is _A4C_ALL, as we do not yet support multiple location policy settings. true object orchestratorId Id of the Orchestrator managing the locations on which we want to deploy. true string Map«string,AbstractPropertyValue» Map«string,RelationshipType» ApplicationEnvironment Name Description Required Schema Default applicationId false string description false string environmentType false enum (OTHER, DEVELOPMENT, INTEGRATION_TESTS, USER_ACCEPTANCE_TESTS, PRE_PRODUCTION, PRODUCTION) groupRoles false object id false string name false string topologyVersion false string userRoles false object version false string Map«string,Array«ApplicationEnvironmentDTO»» CSARDependency Name Description Required Schema Default hash false string name false string version false string Map«string,NodeTemplate» Map«string,CapabilityType» Map«string,string» Map«string,DeploymentArtifact» RestResponse«FacetedSearchResult» Name Description Required Schema Default data false FacetedSearchResult error false RestError GetMultipleDataResult«ApplicationEnvironmentDTO» Name Description Required Schema Default data false ApplicationEnvironmentDTO array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array 
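As a concrete illustration of the SetLocationPoliciesRequest schema above, a request body could look like the following sketch; the orchestrator and location ids are hypothetical placeholders, and _A4C_ALL is currently the only supported group name:

```json
{
  "orchestratorId": "my-orchestrator-id",
  "groupsToLocations": {
    "_A4C_ALL": "my-location-id"
  }
}
```

Both fields are marked required in the schema, so neither may be omitted.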
"},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_applications-deployment-api.html","date":null,"categories":[],"body":" Map«string,AbstractStep» RelationshipType Name Description Required Schema Default abstract false boolean archiveName false string archiveVersion false string artifacts false object attributes false object creationDate false string (date-time) derivedFrom false string array description false string elementId false string id false string interfaces false object lastUpdateDate false string (date-time) nestedVersion false Version properties false object tags false Tag array validSources false string array validTargets false string array workspace false string Deployment Name Description Required Schema Default endDate false string (date-time) environmentId false string id false string locationIds false string array orchestratorDeploymentId false string orchestratorId false string serviceResourceIds false string array sourceId false string sourceName false string sourceType false enum (APPLICATION, CSAR) startDate false string (date-time) versionId false string workflowExecutions false object RequirementDefinition Name Description Required Schema Default capabilityName false string description false string id false string lowerBound false integer (int32) nodeFilter false NodeFilter nodeType false string relationshipType false string type false string upperBound false integer (int32) AbstractStep Name Description Required Schema Default followingSteps false string array name false string precedingSteps false string array GetMultipleJsonResult Name Description Required Schema Default data false string from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) Map«string,DataType» FilterDefinition Name Description Required Schema Default properties false object ScrollJsonResult Name Description Required Schema Default data false string queryDuration false 
integer (int64) scrollId false string totalResults false integer (int64) RestResponse«Void» Name Description Required Schema Default error false RestError RelationshipTemplate Name Description Required Schema Default artifacts false object attributes false object interfaces false object name false string properties false object requirementName false string requirementType false string target false string targetedCapabilityName false string type false string Map«string,Workflow» AbstractWorkflowError Version Name Description Required Schema Default buildNumber false integer (int32) incrementalVersion false integer (int32) majorVersion false integer (int32) minorVersion false integer (int32) qualifier false string DeploymentDTO Name Description Required Schema Default deployment false Deployment locations false Location array source false IDeploymentSource TreeNode Name Description Required Schema Default artifactId false string children false TreeNode array fullPath false string leaf false boolean name false string Map«string,object» DependencyConflictDTO Name Description Required Schema Default dependency false string resolvedVersion false string source false string PropertyValue«Topology» Name Description Required Schema Default definition false boolean value false Topology CapabilityDefinition Name Description Required Schema Default description false string id false string properties false object type false string upperBound false integer (int32) validSources false string array RestResponse«List«DeploymentDTO»» Name Description Required Schema Default data false DeploymentDTO array error false RestError DeploymentArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactName false string artifactRef false string artifactRepository false string artifactType false string deployPath false string description false string repositoryCredential false object repositoryName false string repositoryURL false string 
Map«string,FilterDefinition» NodeGroup Name Description Required Schema Default index false integer (int32) members false string array name false string policies false AbstractPolicy array TimedRequest Name Description Required Schema Default from false integer (int32) intervalEnd false integer (int64) intervalStart false integer (int64) size false integer (int32) ScrollTimedRequest Name Description Required Schema Default intervalEnd false integer (int64) intervalStart false integer (int64) size false integer (int32) Capability Name Description Required Schema Default properties false object type false string Map«string,Capability» PropertyValue Name Description Required Schema Default definition false boolean value false object Workflow Name Description Required Schema Default description false string errors false AbstractWorkflowError array hosts false string array name false string standard false boolean steps false object Map«string,Interface» Map«string,RelationshipTemplate» Topology Name Description Required Schema Default archiveName false string archiveVersion false string creationDate false string (date-time) dependencies false CSARDependency array description false string empty false boolean groups false object id false string inputArtifacts false object inputs false object lastUpdateDate false string (date-time) nestedVersion false Version nodeTemplates false object outputAttributes false object outputCapabilityProperties false object outputProperties false object substitutionMapping false SubstitutionMapping workflows false object workspace false string Map«string,PropertyDefinition» Map«string,List«PropertyConstraint»» DeferredResult«RestResponse«object»» Name Description Required Schema Default result false object setOrExpired false boolean ImplementationArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactRef false string artifactRepository false string artifactType false string 
repositoryCredential false object repositoryName false string repositoryURL false string Requirement Name Description Required Schema Default properties false object type false string RestResponse«object» Name Description Required Schema Default data false object error false RestError Map«string,Map«string,Set«string»»» NodeTemplate Name Description Required Schema Default artifacts false object attributes false object capabilities false object groups false string array interfaces false object name false string portability false object properties false object relationships false object requirements false object type false string TopologyDTO Name Description Required Schema Default archiveContentTree false TreeNode capabilityTypes false object dataTypes false object delegateType false string dependencyConflicts false DependencyConflictDTO array lastOperationIndex false integer (int32) nodeTypes false object operations false AbstractEditorOperation array relationshipTypes false object topology false Topology NodeType Name Description Required Schema Default abstract false boolean alienScore false integer (int64) archiveName false string archiveVersion false string artifacts false object attributes false object capabilities false CapabilityDefinition array creationDate false string (date-time) defaultCapabilities false string array derivedFrom false string array description false string elementId false string id false string interfaces false object lastUpdateDate false string (date-time) nestedVersion false Version portability false object properties false object requirements false RequirementDefinition array substitutionTopologyId false string tags false Tag array workspace false string GetMultipleDataResult Name Description Required Schema Default data false object array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Map«string,NodeGroup» RestResponse«string» Name 
Description Required Schema Default data false enum (DEPLOYED, UNDEPLOYED, INIT_DEPLOYMENT, DEPLOYMENT_IN_PROGRESS, UPDATE_IN_PROGRESS, UPDATED, UNDEPLOYMENT_IN_PROGRESS, WARNING, FAILURE, UPDATE_FAILURE, UNKNOWN) error false RestError Map«string,NodeType» Map«string,Operation» Operation Name Description Required Schema Default dependencies false DeploymentArtifact array description false string implementationArtifact false ImplementationArtifact inputParameters false object portability false object SubstitutionMapping Name Description Required Schema Default capabilities false object requirements false object substitutionType false string Map«string,SubstitutionTarget» PropertyConstraint IValue Name Description Required Schema Default definition false boolean IDeploymentSource Name Description Required Schema Default id false string name false string Map«string,Set«string»» RestError Name Description Required Schema Default code false integer (int32) message false string RestResponse«TopologyDTO» Name Description Required Schema Default data false TopologyDTO error false RestError AbstractPolicy Name Description Required Schema Default name false string type false string DataType Name Description Required Schema Default abstract false boolean archiveName false string archiveVersion false string creationDate false string (date-time) deriveFromSimpleType false boolean derivedFrom false string array description false string elementId false string id false string lastUpdateDate false string (date-time) nestedVersion false Version properties false object tags false Tag array workspace false string JsonRawRestResponse Name Description Required Schema Default data false string error false RestError OperationExecRequest Name Description Required Schema Default applicationEnvironmentId false string instanceId false string interfaceName false string nodeTemplateName false string operationName false string parameters false object Map«string,IValue» CapabilityType Name 
Description Required Schema Default abstract false boolean archiveName false string archiveVersion false string attributes false object creationDate false string (date-time) derivedFrom false string array description false string elementId false string id false string lastUpdateDate false string (date-time) nestedVersion false Version properties false object tags false Tag array validSources false string array workspace false string Map«string,Requirement» SubstitutionTarget Name Description Required Schema Default nodeTemplateName false string serviceRelationshipType false string targetId false string AbstractEditorOperation Name Description Required Schema Default author false string id false string previousOperationId false string Map«string,AbstractPropertyValue» PropertyDefinition Name Description Required Schema Default constraints false PropertyConstraint array default false PropertyValue«Topology» definition false boolean description false string entrySchema false PropertyDefinition password false boolean required false boolean suggestionId false string type false string Map«string,RelationshipType» CSARDependency Name Description Required Schema Default hash false string name false string version false string Map«string,NodeTemplate» Map«string,CapabilityType» Map«string,string» Map«string,DeploymentArtifact» RestResponse«GetMultipleDataResult» Name Description Required Schema Default data false GetMultipleDataResult error false RestError Tag Name Description Required Schema Default name false string value false string RestResponse«Map«string,NodeTemplate»» Name Description Required Schema Default data false object error false RestError AbstractPropertyValue Name Description Required Schema Default definition false boolean Interface Name Description Required Schema Default description false string operations false object type false string Location A location represents a cloud, a region of a cloud, a set of machines and resources; basically, any location on 
which alien will be allowed to perform deployment. Locations are managed by orchestrators. Name Description Required Schema Default applicationPermissions false object creationDate false string (date-time) dependencies false CSARDependency array environmentPermissions false object environmentType false string groupPermissions false object id false string infrastructureType false string lastUpdateDate false string (date-time) metaProperties false object name false string orchestratorId false string userPermissions false object NodeFilter Name Description Required Schema Default capabilities false object properties false object "},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_catalog-api.html","date":null,"categories":[],"body":" Map«string,AbstractStep» RestResponse«List«Usage»» Name Description Required Schema Default data false Usage array error false RestError RequirementDefinition Name Description Required Schema Default capabilityName false string description false string id false string lowerBound false integer (int32) nodeFilter false NodeFilter nodeType false string relationshipType false string type false string upperBound false integer (int32) RecommendationRequest Name Description Required Schema Default capability false string componentId false string RestResponse«NodeType» Name Description Required Schema Default data false NodeType error false RestError AbstractStep Name Description Required Schema Default followingSteps false string array name false string precedingSteps false string array Map«string,Array«string»» FilterDefinition Name Description Required Schema Default properties false object FilteredSearchRequest Name Description Required Schema Default filters false object from false integer (int32) query false string size false integer (int32) CatalogVersionResult Name Description Required Schema Default id false string version false string RestResponse«Void» Name Description Required Schema Default error false 
RestError RelationshipTemplate Name Description Required Schema Default artifacts false object attributes false object interfaces false object name false string properties false object requirementName false string requirementType false string target false string targetedCapabilityName false string type false string Map«string,Workflow» AbstractWorkflowError Version Name Description Required Schema Default buildNumber false integer (int32) incrementalVersion false integer (int32) majorVersion false integer (int32) minorVersion false integer (int32) qualifier false string CreateTopologyRequest Name Description Required Schema Default description false string fromTopologyId false string name false string version false string RestResponse«CsarUploadResult» Name Description Required Schema Default data false CsarUploadResult error false RestError Map«string,Array«FacetedSearchFacet»» CsarUploadResult Name Description Required Schema Default csar false Csar errors false object CapabilityDefinition Name Description Required Schema Default description false string id false string properties false object type false string upperBound false integer (int32) validSources false string array DeploymentArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactName false string artifactRef false string artifactRepository false string artifactType false string deployPath false string description false string repositoryCredential false object repositoryName false string repositoryURL false string Map«string,FilterDefinition» RestResponse«Topology» Name Description Required Schema Default data false Topology error false RestError NodeGroup Name Description Required Schema Default index false integer (int32) members false string array name false string policies false AbstractPolicy array ParsingContext Name Description Required Schema Default fileName false string parsingErrors false ParsingError array Capability Name Description 
Required Schema Default properties false object type false string RestResponse«boolean» Name Description Required Schema Default data false boolean error false RestError Map«string,Capability» RestResponse«List«ParsingResult«Csar»»» Name Description Required Schema Default data false ParsingResult«Csar» array error false RestError PropertyValue Name Description Required Schema Default definition false boolean value false object Workflow Name Description Required Schema Default description false string errors false AbstractWorkflowError array hosts false string array name false string standard false boolean steps false object Map«string,Interface» Map«string,RelationshipTemplate» Map«string,PropertyDefinition» CsarGitRepository Name Description Required Schema Default id false string importLocations false Information of the branch and eventually folder on the branch to import as an alien csar. array password false string repositoryUrl false string storedLocally false boolean username false string Topology Name Description Required Schema Default archiveName false string archiveVersion false string creationDate false string (date-time) dependencies false CSARDependency array description false string empty false boolean groups false object id false string inputArtifacts false object inputs false object lastUpdateDate false string (date-time) nestedVersion false Version nodeTemplates false object outputAttributes false object outputCapabilityProperties false object outputProperties false object substitutionMapping false SubstitutionMapping workflows false object workspace false string Map«string,List«PropertyConstraint»» Information of the branch and eventually folder on the branch to import as an alien csar. Name Description Required Schema Default branchId Id of the git branch to import. true string subPath Optional path of the location in which lies the csar to be imported. 
false string ImplementationArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactRef false string artifactRepository false string artifactType false string repositoryCredential false object repositoryName false string repositoryURL false string Requirement Name Description Required Schema Default properties false object type false string Map«string,Map«string,Set«string»»» NodeTemplate Name Description Required Schema Default artifacts false object attributes false object capabilities false object groups false string array interfaces false object name false string portability false object properties false object relationships false object requirements false object type false string RestResponse«FacetedSearchResult«Topology»» Name Description Required Schema Default data false FacetedSearchResult«Topology» error false RestError Map«string,List«ParsingError»» AbstractToscaType Name Description Required Schema Default archiveName false string archiveVersion false string creationDate false string (date-time) description false string elementId false string id false string lastUpdateDate false string (date-time) nestedVersion false Version tags false Tag array workspace false string NodeType Name Description Required Schema Default abstract false boolean alienScore false integer (int64) archiveName false string archiveVersion false string artifacts false object attributes false object capabilities false CapabilityDefinition array creationDate false string (date-time) defaultCapabilities false string array derivedFrom false string array description false string elementId false string id false string interfaces false object lastUpdateDate false string (date-time) nestedVersion false Version portability false object properties false object requirements false RequirementDefinition array substitutionTopologyId false string tags false Tag array workspace false string Request for creation of a new csar git repository. 
Name Description Required Schema Default importLocations Information of branches and eventually folders to import for the given repository. true Information of the branch and eventually folder on the branch to import as an alien csar. array password Password to access the git repository. false string repositoryUrl Url of the git repository. true string storedLocally Flag to know if the repository should be kept on the alien4cloud server disk (so next imports will be faster). false boolean username Username to access the git repository. false string Map«string,NodeGroup» ParsingResult«Csar» Name Description Required Schema Default context false ParsingContext result false Csar RestResponse«string» Name Description Required Schema Default data false string error false RestError SimpleMark Name Description Required Schema Default column false integer (int32) line false integer (int32) RestResponse«GetMultipleDataResult«CsarGitRepository»» Name Description Required Schema Default data false GetMultipleDataResult«CsarGitRepository» error false RestError UpdateTagRequest Name Description Required Schema Default tagKey false string tagValue false string Map«string,Operation» Operation Name Description Required Schema Default dependencies false DeploymentArtifact array description false string implementationArtifact false ImplementationArtifact inputParameters false object portability false object SubstitutionMapping Name Description Required Schema Default capabilities false object requirements false object substitutionType false string RestResponse«AbstractToscaType» Name Description Required Schema Default data false AbstractToscaType error false RestError FacetedSearchResult«AbstractToscaType» Name Description Required Schema Default data false AbstractToscaType array facets false object from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Map«string,SubstitutionTarget» 
PropertyConstraint IValue Name Description Required Schema Default definition false boolean RestResponse«FacetedSearchResult«AbstractToscaType»» Name Description Required Schema Default data false FacetedSearchResult«AbstractToscaType» error false RestError RestError Name Description Required Schema Default code false integer (int32) message false string Map«string,Set«string»» ComponentSearchRequest Name Description Required Schema Default filters false object from false integer (int32) query false string size false integer (int32) type false enum (NODE_TYPE, CAPABILITY_TYPE, RELATIONSHIP_TYPE, ARTIFACT_TYPE) AbstractPolicy Name Description Required Schema Default name false string type false string FacetedSearchResult«Topology» Name Description Required Schema Default data false Topology array facets false object from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array FacetedSearchResult Name Description Required Schema Default data false object array facets false object from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Csar Name Description Required Schema Default definitionHash false string delegateId false string delegateType false string dependencies false CSARDependency array description false string hash false string id false string importDate false string (date-time) importSource false string license false string name false string nestedVersion false Version tags false Tag array templateAuthor false string toscaDefaultNamespace false string toscaDefinitionsVersion false string version false string workspace false string yamlFilePath false string Usage Name Description Required Schema Default resourceId false string resourceName false string resourceType false string workspace false string Map«string,IValue» GetMultipleDataResult«CsarGitRepository» Name Description 
Required Schema Default data false CsarGitRepository array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Map«string,Requirement» RestResponse«Array«CatalogVersionResult»» Name Description Required Schema Default data false CatalogVersionResult array error false RestError CsarInfoDTO Name Description Required Schema Default csar false Csar relatedResources false Usage array SubstitutionTarget Name Description Required Schema Default nodeTemplateName false string serviceRelationshipType false string targetId false string RestResponse«CsarInfoDTO» Name Description Required Schema Default data false CsarInfoDTO error false RestError PropertyDefinition Name Description Required Schema Default constraints false PropertyConstraint array default false PropertyValue definition false boolean description false string password false boolean required false boolean suggestionId false string type false string Map«string,AbstractPropertyValue» CSARDependency Name Description Required Schema Default hash false string name false string version false string Map«string,NodeTemplate» ParsingError Name Description Required Schema Default context false string endMark false SimpleMark errorCode false enum (INVALID_YAML, CSAR_ALREADY_INDEXED, CSAR_ALREADY_EXISTS, CSAR_ALREADY_EXISTS_IN_ANOTHER_WORKSPACE, CSAR_IMPORT_ITSELF, UNSUPPORTED_SUBSTITUTION, DERIVED_FROM_CONCRETE_TYPE_SUBSTITUTION, DEPENDENCY_NOT_VISIBLE_FROM_TARGET_WORKSPACE, CSAR_USED_IN_ACTIVE_DEPLOYMENT, SINGLE_DEFINITION_SUPPORTED, ENTRY_DEFINITION_NOT_FOUND, ERRONEOUS_ARCHIVE_FILE, SYNTAX_ERROR, MISSING_TOSCA_VERSION, UNKNOWN_TOSCA_VERSION, TOSCA_VERSION_NOT_FIRST, UNRECOGNIZED_PROPERTY, UNKNWON_DISCRIMINATOR_KEY, MISSING_FILE, FAILED_TO_READ_FILE, DUPLICATED_ELEMENT_DECLARATION, TYPE_NOT_FOUND, CYCLIC_DERIVED_FROM, DERIVED_FROM_NOTHING, INVALID_ICON_FORMAT, ALIEN_MAPPING_ERROR, VALIDATION_ERROR, UNKNOWN_CONSTRAINT, 
INVALID_CONSTRAINT, MISSING_DEPENDENCY, SNAPSHOT_DEPENDENCY, INVALID_SCALAR_UNIT, POTENTIAL_BAD_PROPERTY_VALUE, UNKNOWN_ARTIFACT_KEY, UNKNOWN_REPOSITORY, INVALID_ARTIFACT_REFERENCE, UNRESOLVED_ARTIFACT, TOPOLOGY_DETECTED, TOPOLOGY_UPDATED, MISSING_TOPOLOGY_INPUT, YAML_SEQUENCE_EXPECTED, YAML_MAPPING_NODE_EXPECTED, YAML_SCALAR_NODE_EXPECTED, UNKNOWN_CAPABILITY, REQUIREMENT_TARGET_NODE_TEMPLATE_NAME_REQUIRED, REQUIREMENT_NOT_FOUND, REQUIREMENT_TARGET_NOT_FOUND, REQUIREMENT_CAPABILITY_MULTIPLE_MATCH, REQUIREMENT_CAPABILITY_NOT_FOUND, OUTPUTS_BAD_PARAMS_COUNT, OUTPUTS_UNKNOWN_FUNCTION, UNKOWN_GROUP_POLICY, UNKOWN_GROUP_MEMBER, EMPTY_TOPOLOGY, UNKNWON_WORKFLOW_STEP, WORKFLOW_HAS_ERRORS, INVALID_NODE_TEMPLATE_NAME, INVALID_NAME, TOSCA_TYPE_ALREADY_EXISTS_IN_OTHER_CSAR, TRANSITIVE_DEPENDENCY_VERSION_CONFLICT, DEPENDENCY_VERSION_CONFLICT) errorLevel false enum (ERROR, WARNING, INFO) note false string problem false string startMark false SimpleMark ElementFromArchiveRequest Name Description Required Schema Default componentType false enum (NODE_TYPE, CAPABILITY_TYPE, RELATIONSHIP_TYPE, ARTIFACT_TYPE) dependencies false CSARDependency array elementName false string Map«string,DeploymentArtifact» RestResponse«FacetedSearchResult» Name Description Required Schema Default data false FacetedSearchResult error false RestError RestResponse«CsarGitRepository» Name Description Required Schema Default data false CsarGitRepository error false RestError Tag Name Description Required Schema Default name false string value false string AbstractPropertyValue Name Description Required Schema Default definition false boolean Interface Name Description Required Schema Default description false string operations false object type false string NodeFilter Name Description Required Schema Default capabilities false object properties false object "},{"title":"Definitions 
document","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/definitions_document.html","date":null,"categories":[],"body":"The root element of a definition file is called the Service Template. A TOSCA Definitions YAML document contains element definitions of building blocks (types) for cloud applications, or complete models of cloud applications (templates). This section describes the top-level structural elements (i.e., YAML keys) which are allowed to appear in a TOSCA Definitions YAML document. Keynames A TOSCA Definitions file contains the following element keynames: Keyname Required Type Description tosca_definitions_version yes string Defines the version of the TOSCA Simple Profile specification the template (grammar) complies with (1). Recommended version is alien_dsl_1_3_0 alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 metadata yes(2) map of string Defines a section used to declare additional metadata information. When using tosca_simple_yaml_1_0 in alien4cloud the metadata section must be defined and must define the template_name and template_version recognized metadata. alien_dsl_1_3_0, tosca_simple_yaml_1_0 template_name yes(2) string Declares the name of the template. alien_dsl_1_3_0 alien_dsl_1_2_0 template_version yes(2) version Declares the version string for the template. alien_dsl_1_3_0 alien_dsl_1_2_0 template_author no string Declares the author(s) of the template. alien_dsl_1_3_0 alien_dsl_1_2_0 description no string Declares a description for this Service Template and its contents. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 dsl_definitions no map of yaml macros Declares optional DSL-specific definitions and conventions. For example, in YAML, this allows defining reusable YAML macros (i.e., YAML alias anchors) for use throughout the TOSCA Service Template. 
alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 repositories no map of repository definitions Declares the list of external repositories which contain artifacts that are referenced in the service template along with their addresses and necessary credential information used to connect to them in order to retrieve the artifacts. alien_dsl_1_3_0 tosca_simple_yaml_1_0 imports no list of import strings (3) Declares import statements for external TOSCA Definitions documents (files). alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 artifact_types no map of artifact types This section contains an optional list of artifact type definitions for use in service templates. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 data_types no map of data types Declares a list of optional TOSCA Data Type definitions. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 capability_types no map of capability types This section contains an optional list of capability type definitions for use in service templates. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 interface_types no list of interface type Interface types are not supported in alien4cloud. Interfaces are defined directly on the node and relationship types, as in previous drafts of the TOSCA specification. N.A. relationship_types no list of relationship types This section contains a set of relationship type definitions for use in service templates. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 node_types no list of node types This section contains a set of node type definitions for use in service templates. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 group_types no list of group types Group types are not yet supported in alien4cloud. They are not well documented in TOSCA and, while we do support grouping inside alien4cloud, there are no types associated with them. N.A. policy_types no list of policy types Policy types are not yet supported in alien4cloud. 
Policy elements are not fully defined in TOSCA and, while we do support some policies in alien4cloud, they are not exposed as TOSCA types and it is not yet possible to add them in a dynamic way. N.A. topology_template no Topology template definition Defines the topology template of an application or service, consisting of node templates that represent the application’s or service’s components, as well as relationship templates representing relations between the components. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 (1) When the tosca_definitions_version is set to tosca_simple_yaml_1_0 an automatic direct import of the TOSCA normative types version 1.0.0 is added to the service template. Alien 4 Cloud is not currently packaged with version 1.0.0 of the normative types as there are still some minor differences with alien’s supported types. If you wish to use tosca_simple_yaml_1_0, make sure that you upload the types first. (2) In Alien 4 Cloud the template name and versions are required, as we support versioning of the templates and indexing of elements in a catalog. In the TOSCA specification they are optional. To specify versions using the tosca_simple_yaml_1_0 definition version you must define the template_name and template_version in the metadata section. Using metadata is compliant with the TOSCA specification and will be the future way to define this in alien4cloud. Version 1.4.0 has a bug preventing support of this definition in alien_dsl_1_3_0, which is fixed in 1.3.1. In alien_dsl_1_3_0 alien_dsl_1_2_0 alien_dsl_1_1_0 it is possible to define the template_name and template_version directly at the root of the definition document. (3) Alien 4 Cloud currently supports an import syntax based on template names and versions. We believe it is a better way to reference dependencies, but this is not yet acknowledged by TOSCA. On the other hand we don’t yet support relative or URL-based imports. 
Grammar The overall structure of a TOSCA Service Template and its top-level key collations using the TOSCA Simple Profile is shown below: tosca_definitions_version : # Required TOSCA Definitions version string # Specific to alien_dsl_1_3_0 since 1.3.1 and tosca_simple_yaml_1_0 metadata : template_name : # Optional name of this service template template_author : # Optional author of this service template template_version : # Optional version of this service template # Specific to alien_dsl_1_3_0 and alien_dsl_1_2_0 template_name : # Optional name of this service template template_author : # Optional author of this service template template_version : # Optional version of this service template description : A short description of the definitions inside the file. dsl_definitions : # map of YAML alias anchors (or macros) repositories : # map of repositories imports : # list of import statements for importing other definitions files artifact_types : # list of artifact type definitions data_types : # list of data type definitions capability_types : # list of capability type definitions relationship_types : # list of relationship type definitions node_types : # list of node type definitions topology_template : # Topology template definition tosca_definitions_version This required element provides a means to include a reference to the TOSCA Simple Profile specification within the TOSCA Definitions YAML file. It is an indicator for the version of the TOSCA grammar that should be used to parse the remainder of the document. 
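Putting the keynames above together, a minimal, illustrative definitions file might look like the sketch below (the template name, author, versions and node type name are examples chosen for this sketch, not normative values):

```yaml
# Minimal TOSCA definitions file for Alien 4 Cloud (illustrative names/versions)
tosca_definitions_version: alien_dsl_1_3_0

template_name: my-sample-types      # required by Alien 4 Cloud
template_version: 1.0.0             # required by Alien 4 Cloud
template_author: admin
description: A minimal definitions file declaring one custom node type.

imports:
  - tosca-normative-types:1.1.0     # catalog-style import; assumes the archive is uploaded

node_types:
  mycompany.mytypes.SampleNode:
    derived_from: tosca.nodes.Root
```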
Keyword tosca_definitions_version Grammar tosca_definitions_version : <tosca_simple_profile_version> Examples: Alien 4 Cloud TOSCA Simple Profile version 1.3.1 specification using the defined namespace alias: tosca_definitions_version : alien_dsl_1_3_0 TOSCA Simple Profile version 1.0 specification using the defined namespace alias: tosca_definitions_version : tosca_simple_yaml_1_0 TOSCA Simple Profile version 1.0 specification using the fully defined (target) namespace: tosca_definitions_version : http://docs.oasis-open.org/tosca/simple/1.0 metadata The metadata section allows declaring additional metadata information including the template name, version and author. Before 1.3.1, the metadata section is not supported in alien_dsl_1_3_0 and you should use the template_name, template_version and template_author at the root level of the definition document. metadata : template_name : <name string> template_version : <version> template_name This optional element declares the name of the service template as a single-line string value. Keyword template_name Grammar template_name : <name string> Example template_name : My service template Notes Some service templates are designed to be referenced and reused by other service templates. Therefore, in these cases, the template_name value SHOULD be designed to be used as a unique identifier through the use of namespacing techniques. template_author This optional element declares the author(s) of the service template as a single-line string value. Keyword template_author Grammar template_author : <author string> Example template_author : My service template template_version This element declares the optional version of the service template as a single-line string value. Grammar template_version : <version> Example template_version : 2.0.17 Some service templates are designed to be referenced and reused by other service templates and have a lifecycle of their own. 
Therefore, in these cases, a template_version value SHOULD be included and used in conjunction with a unique template_name value to enable lifecycle management of the service template and its contents. description This optional element provides a means to include single or multiline descriptions within a TOSCA Simple Profile template as a scalar string value. imports This optional element provides a way to import a block sequence of one or more TOSCA Definitions documents. TOSCA Definitions documents can contain reusable TOSCA type definitions (e.g., Node Types, Relationship Types, Artifact Types, etc.) defined by other authors. This mechanism provides an effective way for companies and organizations to define normative types and/or describe their software applications for reuse in other TOSCA Service Templates. In Alien 4 Cloud you can import libraries from the repository instead of having to package every required element within your archives. This also allows better management of versioning and dependencies. In order to support this scenario the import element supports an additional non-normative definition. We also do not support relative imports or URL-based imports in the current version of alien4cloud. Grammar imports : - <tosca_definitions_file_1> - ... - <tosca_definitions_file_n> Alien 4 Cloud specific grammar for catalog imports based on Definitions template names and versions. imports : - <tosca_template_name_1>:<tosca_template_version_1> - ... - <tosca_template_name_n>:<tosca_template_version_n> Example # An example import of definitions files from a location relative to the # file location of the service template declaring the import. # Note that this notation is not yet supported by alien4cloud. imports : - relative_path/my_defns/my_typesdefs_1.yaml - ... - relative_path/my_defns/my_typesdefs_n.yaml Alien 4 Cloud specific. imports : - <tosca-normative-types>:<1.1.0> - ... 
- <apache-server>:<2.0.3> dsl_definitions This optional element provides a section to define macros. A macro can be reused elsewhere by referencing it. Example In the following example, we define a ‘macro’ named ‘my_compute_node_props’ which defines a property ‘os_type’ and its value. It is used for both nodes, compute1 and compute2. dsl_definitions : my_compute_node_props : &my_compute_node_props os_type : linux topology_template : node_templates : compute1 : type : tosca.nodes.Compute properties : *my_compute_node_props compute2 : type : tosca.nodes.Compute properties : *my_compute_node_props capability_types This element lists the Capability Types that provide the reusable type definitions that can be used to describe the features that Node Templates or Node Types can declare they support. Grammar capability_types : <capability_type_defn_1> ... <capability type_defn_n> Example capability_types : mycompany.mytypes.myCustomEndpoint : derived_from : tosca.capabilities.Endpoint properties : # more details ... mycompany.mytypes.myCustomFeature : derived_from : tosca.capabilities.Feature properties : # more details ... relationship_types This element lists the Relationship Types that provide the reusable type definitions that can be used to describe dependent relationships between Node Templates or Node Types. Grammar relationship_types : <relationship_type_defn_1> ... <relationship type_defn_n> Example relationship_types : mycompany.mytypes.myCustomClientServerType : derived_from : tosca.relationships.HostedOn properties : # more details ... mycompany.mytypes.myCustomConnectionType : derived_from : tosca.relationships.ConnectsTo properties : # more details ... node_types This element lists the Node Types that provide the reusable type definitions for software components that Node Templates can be based upon. Grammar node_types : <node_types_defn_1> ... 
<node_type_defn_n> Example node_types : my_webapp_node_type : derived_from : WebApplication properties : my_port : type : integer my_database_node_type : derived_from : Database capabilities : mytypes.myfeatures.transactSQL The node types listed as part of the node_types block can be mapped to the list of NodeType definitions as described by the TOSCA v1.0 specification. topology_template see: - Topology template "},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_other-apis.html","date":null,"categories":[],"body":" SortConfiguration Name Description Required Schema Default ascending false boolean sortBy false string RestResponse«TopologyPortabilityInsight» Name Description Required Schema Default data false TopologyPortabilityInsight error false RestError RepositoryPluginComponent Name Description Required Schema Default pluginComponent false Result for a request for specific plugin components. repositoryType false string UpdateServiceResourceRequest Request to update a service resource. Name Description Required Schema Default capabilitiesRelationshipTypes Map capability name -> relationship type id that optionally defines a relationship type to use to perform the service side operations to connect to the service on a given capability. false object description The description of the service. false string locationIds The list of locations. false string array name The name of the service. true string nodeInstance The node instance definition for the service. true Represents a simple node instance with its properties and attributes. requirementsRelationshipTypes Map requirement name -> relationship type id that optionally defines a relationship type to use to perform the service side operations to connect to the service on a given requirement. false object version The version of the service. 
true string UserDTO Name Description Required Schema Default email false string firstName false string lastName false string username false string GroupDTO Name Description Required Schema Default description false string email false string id false string name false string Creation request for a suggestion. Name Description Required Schema Default esIndex Id of the elasticsearch index where the property to be suggested is located. false string esType Id of the elasticsearch type where the property to be suggested is located. false string suggestions List of initial values for suggestions. false string array targetElementId Id of the element where the property to be suggested is located. false string targetProperty Id of the property to be suggested. false string UserStatus Name Description Required Schema Default authSystem false string githubUsername false string groups false string array isLogged false boolean roles false Collection«string» username false string PaaSDeploymentLog Name Description Required Schema Default content false string deploymentId false string deploymentPaaSId false string executionId false string id false string instanceId false string interfaceName false string level false enum (debug, info, warn, error) nodeId false string operationName false string timestamp false string (date-time) type false string workflowId false string Map«string,Array«string»» FilteredSearchRequest Name Description Required Schema Default filters false object from false integer (int32) query false string size false integer (int32) RestResponse«Void» Name Description Required Schema Default error false RestError RelationshipTemplate Name Description Required Schema Default artifacts false object attributes false object interfaces false object name false string properties false object requirementName false string requirementType false string target false string targetedCapabilityName false string type false string Version Name Description Required Schema Default 
buildNumber false integer (int32) incrementalVersion false integer (int32) majorVersion false integer (int32) minorVersion false integer (int32) qualifier false string RestResponse«FacetedSearchResult«PaaSDeploymentLog»» Name Description Required Schema Default data false FacetedSearchResult«PaaSDeploymentLog» error false RestError PortableLocationDTO Name Description Required Schema Default environmentNames false string array infrastructureType false string locationId false string locationName false string orchestratorId false string orchestratorName false string portabilityLevel false enum (ERROR, WARNING, INFO) Map«string,Array«FacetedSearchFacet»» PatchServiceResourceRequest Request to update a service resource. Name Description Required Schema Default capabilitiesRelationshipTypes Map capability name -> relationship type id that optionally defines a relationship type to use to perform the service side operations to connect to the service on a given capability. false object description The new description of the service, or undefined if the update request should not update the service description. false string locationIds The new list of location ids, or undefined if the update request should not update the service location ids. false string array name The new name of the service, or undefined if the update request should not update the service name. false string nodeInstance The new node instance definition for the service, or undefined if the update request should not update the node instance definition. false Represents a simple node instance with its properties and attributes. requirementsRelationshipTypes Map requirement name -> relationship type id that optionally defines a relationship type to use to perform the service side operations to connect to the service on a given requirement. false object version The new version of the service, or undefined if the update request should not update the service version. 
false string Collection«string» CreateRepositoryRequest Name Description Required Schema Default configuration false object name false string pluginId false string DeploymentArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactName false string artifactRef false string artifactRepository false string artifactType false string deployPath false string description false string repositoryCredential false object repositoryName false string repositoryURL false string FacetedSearchResult«PaaSDeploymentLog» Name Description Required Schema Default data false PaaSDeploymentLog array facets false object from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Capability Name Description Required Schema Default properties false object type false string Map«string,Capability» Map«string,Interface» Map«string,RelationshipTemplate» RestResponse«Service.» Name Description Required Schema Default data false Service. 
error false RestError RestResponse«Array«string»» Name Description Required Schema Default data false string array error false RestError ImplementationArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactRef false string artifactRepository false string artifactType false string repositoryCredential false object repositoryName false string repositoryURL false string Requirement Name Description Required Schema Default properties false object type false string RestResponse«object» Name Description Required Schema Default data false object error false RestError NodeTemplate Name Description Required Schema Default artifacts false object attributes false object capabilities false object groups false string array interfaces false object name false string portability false object properties false object relationships false object requirements false object type false string Represents a simple node instance with its properties and attributes. Name Description Required Schema Default attributeValues Map of values for the runtime attributes of a TOSCA instance. false object capabilities Map of capabilities that contains the values of the properties as defined in the instance type. false object properties Map of property values that must match the properties defined in the instance type. false object type The TOSCA node type of the instance. true string typeVersion The version of the TOSCA node type of the instance. true string Service. A service is something running somewhere, exposing capabilities and requirements, matchable in a topology in place of an abstract component. 
Name Description Required Schema Default applicationPermissions false object capabilitiesRelationshipTypes false object creationDate false string (date-time) dependency false CSARDependency deploymentId false string description false string environmentId false string environmentPermissions false object groupPermissions false object id false string lastUpdateDate false string (date-time) locationIds false string array name false string nestedVersion false Version nodeInstance false NodeInstance requirementsRelationshipTypes false object userPermissions false object version false string RestResponse«ConstraintInformation» Name Description Required Schema Default data false ConstraintInformation error false RestError GetMultipleDataResult Name Description Required Schema Default data false object array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array NodeInstance Name Description Required Schema Default attributeValues false object nodeTemplate false NodeTemplate typeVersion false string Application Name Description Required Schema Default creationDate false string (date-time) description false string groupRoles false object id false string imageId false string lastUpdateDate false string (date-time) metaProperties false object name false string tags false Tag array userRoles false object SortedSearchRequest Name Description Required Schema Default desc false boolean filters false object from false integer (int32) query false string size false integer (int32) sortField false string RestResponse«string» Name Description Required Schema Default data false string error false RestError RestResponse«List«GroupDTO»» Name Description Required Schema Default data false GroupDTO array error false RestError RestResponse«GetMultipleDataResult«Service.»» Name Description Required Schema Default data false GetMultipleDataResult«Service.» error false RestError 
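For reference, every RestResponse«…» type in these tables is the same envelope: a data payload whose schema is the parameterized type, plus an optional RestError carrying an integer code and a message. A hypothetical error response (the code and message values below are illustrative, not taken from the API) could therefore look like:

```json
{
  "data": null,
  "error": {
    "code": 500,
    "message": "Illustrative error message"
  }
}
```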
GetMultipleDataResult«Service.» Name Description Required Schema Default data false Service. array from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Map«string,Operation» Operation Name Description Required Schema Default dependencies false DeploymentArtifact array description false string implementationArtifact false ImplementationArtifact inputParameters false object portability false object RestResponse«List«UserDTO»» Name Description Required Schema Default data false UserDTO array error false RestError IValue Name Description Required Schema Default definition false boolean TopologyPortabilityInsight Name Description Required Schema Default limitations false PortabilityLimitation array locationLimitations false object locations false PortableLocationDTO array RestError Name Description Required Schema Default code false integer (int32) message false string Map«string,Set«string»» Request for creation of a new service. Name Description Required Schema Default name Name of the new service (must be unique for a given version). true string nodeType The node type to use to build the service node template. true string nodeTypeVersion Archive version of the node type. true string version Version of the new service. 
true string ApplicationEnvironmentAuthorizationUpdateRequest Name Description Required Schema Default applicationsToAdd false string array applicationsToDelete false string array environmentsToAdd false string array environmentsToDelete false string array resources false string array PortabilityLimitation Name Description Required Schema Default code false enum (NOT_NORMATIVE, ORCHESTRATOR_DEPENDENT, IAAS_DEPENDENT, LOCATION_RESOURCE_MATCH, ARTIFACT_NOT_SUPPORTED_ON_HOST, RUNTIME_PACKAGE_NOT_SATISFIED, ARTIFACT_AND_RUNTIME_NOT_SATISFIED, ORCHESTRATOR_CONFLICT, IAAS_CONFLICT, ARTIFACT_SUPPORT, RUNTIME_PACKAGE) info false string array level false enum (ERROR, WARNING, INFO) RestResponse«List«ApplicationEnvironmentAuthorizationDTO»» Name Description Required Schema Default data false ApplicationEnvironmentAuthorizationDTO array error false RestError Map«string,List«PortabilityLimitation»» FacetedSearchResult Name Description Required Schema Default data false object array facets false object from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array RestResponse«UserStatus» Name Description Required Schema Default data false UserStatus error false RestError Result for a request for specific plugin components. Name Description Required Schema Default componentDescriptor Description of the component within the plugin. false Describe a component of a plugin (can be an IOrchestrator etc.). pluginId Id of the plugin that contains the component. false string pluginName Name of the plugin that contains the component. false string version Version of the plugin that contains the component. false string Map«string,IValue» Describe a component of a plugin (can be an IOrchestrator etc.). Name Description Required Schema Default beanName Name of the component bean in the plugin spring context. false string description Description of the plugin. false string name Name of the plugin component. 
false string type Type of the plugin. false string Map«string,Requirement» SearchLogRequest Name Description Required Schema Default filters false object from false integer (int32) fromDate false string (date-time) query false string size false integer (int32) sortConfiguration false SortConfiguration toDate false string (date-time) ConstraintInformation Name Description Required Schema Default name false string path false string reference false object type false string value false string Map«string,AbstractPropertyValue» ApplicationEnvironment Name Description Required Schema Default applicationId false string description false string environmentType false enum (OTHER, DEVELOPMENT, INTEGRATION_TESTS, USER_ACCEPTANCE_TESTS, PRE_PRODUCTION, PRODUCTION) groupRoles false object id false string name false string topologyVersion false string userRoles false object version false string RestResponse«List«RepositoryPluginComponent»» Name Description Required Schema Default data false RepositoryPluginComponent array error false RestError CSARDependency Name Description Required Schema Default hash false string name false string version false string BasicSearchRequest Name Description Required Schema Default from false integer (int32) query false string size false integer (int32) UpdateRepositoryRequest Name Description Required Schema Default configuration false object name false string Map«string,DeploymentArtifact» Map«string,string» ApplicationEnvironmentAuthorizationDTO Name Description Required Schema Default application false Application environments false ApplicationEnvironment array RestResponse«FacetedSearchResult» Name Description Required Schema Default data false FacetedSearchResult error false RestError RestResponse«GetMultipleDataResult» Name Description Required Schema Default data false GetMultipleDataResult error false RestError Tag Name Description Required Schema Default name false string value false string AbstractPropertyValue Name Description Required 
Schema Default definition false boolean Interface Name Description Required Schema Default description false string operations false object type false string "},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_topology-editor-api.html","date":null,"categories":[],"body":" ILocationMatch Name Description Required Schema Default location false Location orchestrator false Orchestrator. ready false boolean reasons false object Map«string,AbstractStep» RelationshipType Name Description Required Schema Default abstract false boolean archiveName false string archiveVersion false string artifacts false object attributes false object creationDate false string (date-time) derivedFrom false string array description false string elementId false string id false string interfaces false object lastUpdateDate false string (date-time) nestedVersion false Version properties false object tags false Tag array validSources false string array validTargets false string array workspace false string RequirementDefinition Name Description Required Schema Default capabilityName false string description false string id false string lowerBound false integer (int32) nodeFilter false NodeFilter nodeType false string relationshipType false string type false string upperBound false integer (int32) AbstractStep Name Description Required Schema Default followingSteps false string array name false string precedingSteps false string array Map«string,DataType» FilterDefinition Name Description Required Schema Default properties false object RelationshipTemplate Name Description Required Schema Default artifacts false object attributes false object interfaces false object name false string properties false object requirementName false string requirementType false string target false string targetedCapabilityName false string type false string Map«string,Workflow» AbstractWorkflowError Version Name Description Required Schema Default buildNumber false integer (int32) 
incrementalVersion false integer (int32) majorVersion false integer (int32) minorVersion false integer (int32) qualifier false string TreeNode Name Description Required Schema Default artifactId false string children false TreeNode array fullPath false string leaf false boolean name false string Map«string,object» DependencyConflictDTO Name Description Required Schema Default dependency false string resolvedVersion false string source false string PropertyValue«Topology» Name Description Required Schema Default definition false boolean value false Topology CapabilityDefinition Name Description Required Schema Default description false string id false string properties false object type false string upperBound false integer (int32) validSources false string array DeploymentArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactName false string artifactRef false string artifactRepository false string artifactType false string deployPath false string description false string repositoryCredential false object repositoryName false string repositoryURL false string Map«string,FilterDefinition» NodeGroup Name Description Required Schema Default index false integer (int32) members false string array name false string policies false AbstractPolicy array Capability Name Description Required Schema Default properties false object type false string Orchestrator. An orchestrator in Alien 4 Cloud is a software engine that Alien 4 Cloud connects to in order to orchestrate a topology deployment. An orchestrator may manage one or multiple locations. 
Name Description Required Schema Default deploymentNamePattern false string id false string name false string pluginBean false string pluginId false string state false enum (DISABLED, CONNECTING, CONNECTED, DISCONNECTED) Map«string,Capability» PropertyValue Name Description Required Schema Default definition false boolean value false object Workflow Name Description Required Schema Default description false string errors false AbstractWorkflowError array hosts false string array name false string standard false boolean steps false object Map«string,Interface» Map«string,RelationshipTemplate» Topology Name Description Required Schema Default archiveName false string archiveVersion false string creationDate false string (date-time) dependencies false CSARDependency array description false string empty false boolean groups false object id false string inputArtifacts false object inputs false object lastUpdateDate false string (date-time) nestedVersion false Version nodeTemplates false object outputAttributes false object outputCapabilityProperties false object outputProperties false object substitutionMapping false SubstitutionMapping workflows false object workspace false string Map«string,PropertyDefinition» Map«string,List«PropertyConstraint»» ImplementationArtifact Name Description Required Schema Default archiveName false string archiveVersion false string artifactRef false string artifactRepository false string artifactType false string repositoryCredential false object repositoryName false string repositoryURL false string TopologyValidationResult Name Description Required Schema Default taskList false AbstractTask array valid false boolean warningList false AbstractTask array Requirement Name Description Required Schema Default properties false object type false string Map«string,Map«string,Set«string»»» NodeTemplate Name Description Required Schema Default artifacts false object attributes false object capabilities false object groups false string array 
interfaces false object name false string portability false object properties false object relationships false object requirements false object type false string TopologyDTO Name Description Required Schema Default archiveContentTree false TreeNode capabilityTypes false object dataTypes false object delegateType false string dependencyConflicts false DependencyConflictDTO array lastOperationIndex false integer (int32) nodeTypes false object operations false AbstractEditorOperation array relationshipTypes false object topology false Topology NodeType Name Description Required Schema Default abstract false boolean alienScore false integer (int64) archiveName false string archiveVersion false string artifacts false object attributes false object capabilities false CapabilityDefinition array creationDate false string (date-time) defaultCapabilities false string array derivedFrom false string array description false string elementId false string id false string interfaces false object lastUpdateDate false string (date-time) nestedVersion false Version portability false object properties false object requirements false RequirementDefinition array substitutionTopologyId false string tags false Tag array workspace false string Map«string,NodeGroup» Map«string,NodeType» AbstractTask Name Description Required Schema Default code false enum (IMPLEMENT, IMPLEMENT_RELATIONSHIP, REPLACE, SATISFY_LOWER_BOUND, PROPERTIES, HA_INVALID, SCALABLE_CAPABILITY_INVALID, NODE_FILTER_INVALID, WORKFLOW_INVALID, INPUT_ARTIFACT_INVALID, ARTIFACT_INVALID, LOCATION_POLICY, LOCATION_UNAUTHORIZED, LOCATION_DISABLED, ORCHESTRATOR_PROPERTY, INPUT_PROPERTY, NODE_NOT_SUBSTITUTED, FORBIDDEN_OPERATION) Map«string,Operation» Operation Name Description Required Schema Default dependencies false DeploymentArtifact array description false string implementationArtifact false ImplementationArtifact inputParameters false object portability false object SubstitutionMapping Name Description Required Schema 
Default capabilities false object requirements false object substitutionType false string Map«string,SubstitutionTarget» PropertyConstraint IValue Name Description Required Schema Default definition false boolean RestResponse«TopologyDTO» Name Description Required Schema Default data false TopologyDTO error false RestError Map«string,Set«string»» RestError Name Description Required Schema Default code false integer (int32) message false string AbstractPolicy Name Description Required Schema Default name false string type false string DataType Name Description Required Schema Default abstract false boolean archiveName false string archiveVersion false string creationDate false string (date-time) deriveFromSimpleType false boolean derivedFrom false string array description false string elementId false string id false string lastUpdateDate false string (date-time) nestedVersion false Version properties false object tags false Tag array workspace false string RestResponse«List«ILocationMatch»» Name Description Required Schema Default data false ILocationMatch array error false RestError Map«string,IValue» CapabilityType Name Description Required Schema Default abstract false boolean archiveName false string archiveVersion false string attributes false object creationDate false string (date-time) derivedFrom false string array description false string elementId false string id false string lastUpdateDate false string (date-time) nestedVersion false Version properties false object tags false Tag array validSources false string array workspace false string Map«string,Requirement» SubstitutionTarget Name Description Required Schema Default nodeTemplateName false string serviceRelationshipType false string targetId false string AbstractEditorOperation Name Description Required Schema Default author false string id false string previousOperationId false string Map«string,AbstractPropertyValue» PropertyDefinition Name Description Required Schema Default constraints false 
PropertyConstraint array default false PropertyValue«Topology» definition false boolean description false string entrySchema false PropertyDefinition password false boolean required false boolean suggestionId false string type false string Map«string,RelationshipType» CSARDependency Name Description Required Schema Default hash false string name false string version false string RestResponse«TopologyValidationResult» Name Description Required Schema Default data false TopologyValidationResult error false RestError Map«string,NodeTemplate» Map«string,CapabilityType» Map«string,DeploymentArtifact» Map«string,string» Tag Name Description Required Schema Default name false string value false string AbstractPropertyValue Name Description Required Schema Default definition false boolean Interface Name Description Required Schema Default description false string operations false object type false string NodeFilter Name Description Required Schema Default capabilities false object properties false object Location A location represents a cloud, a region of a cloud, a set of machines and resources: basically, any location on which alien4cloud will be allowed to perform deployments. Locations are managed by orchestrators. 
Name Description Required Schema Default applicationPermissions false object creationDate false string (date-time) dependencies false CSARDependency array environmentPermissions false object environmentType false string groupPermissions false object id false string infrastructureType false string lastUpdateDate false string (date-time) metaProperties false object name false string orchestratorId false string userPermissions false object "},{"title":"Definitions","baseurl":"","url":"/documentation/1.4.0/rest/definitions_workspaces-api.html","date":null,"categories":[],"body":" Csar Name Description Required Schema Default definitionHash false string delegateId false string delegateType false string dependencies false CSARDependency array description false string hash false string id false string importDate false string (date-time) importSource false string license false string name false string nestedVersion false Version tags false Tag array templateAuthor false string toscaDefaultNamespace false string toscaDefinitionsVersion false string version false string workspace false string yamlFilePath false string Usage Name Description Required Schema Default resourceId false string resourceName false string resourceType false string workspace false string RestResponse«string» Name Description Required Schema Default data false string error false RestError RestResponse«PromotionRequest» Name Description Required Schema Default data false PromotionRequest error false RestError Map«string,List«Usage»» Map«string,Csar» CSARDependency Name Description Required Schema Default hash false string name false string version false string Map«string,Array«string»» RestResponse«List«Workspace»» Name Description Required Schema Default data false Workspace array error false RestError FilteredSearchRequest Name Description Required Schema Default filters false object from false integer (int32) query false string size false integer (int32) Version Name Description Required Schema 
Default buildNumber false integer (int32) incrementalVersion false integer (int32) majorVersion false integer (int32) minorVersion false integer (int32) qualifier false string CSARPromotionImpact Name Description Required Schema Default currentUsages false object hasWriteAccessOnTarget false boolean impactedCsars false object RestError Name Description Required Schema Default code false integer (int32) message false string PromotionRequest Name Description Required Schema Default csarName false string csarVersion false string id false string processDate false string (date-time) processUser false string requestDate false string (date-time) requestUser false string status false enum (INIT, ACCEPTED, REFUSED) targetWorkspace false string CreateTopologyRequest Name Description Required Schema Default description false string fromTopologyId false string name false string version false string workspace false string RestResponse«FacetedSearchResult» Name Description Required Schema Default data false FacetedSearchResult error false RestError Map«string,Array«FacetedSearchFacet»» RestResponse«CSARPromotionImpact» Name Description Required Schema Default data false CSARPromotionImpact error false RestError Tag Name Description Required Schema Default name false string value false string FacetedSearchResult Name Description Required Schema Default data false object array facets false object from false integer (int32) queryDuration false integer (int64) to false integer (int32) totalResults false integer (int64) types false string array Workspace Name Description Required Schema Default id false string name false string roles false enum (ADMIN, APPLICATIONS_MANAGER, ARCHITECT, COMPONENTS_MANAGER, COMPONENTS_BROWSER) array scope false enum (user, group, app, ALIEN_GLOBAL_WORKSPACE) "},{"title":"Deployment and portability","baseurl":"","url":"/documentation/1.4.0/concepts/deployment.html","date":null,"categories":[],"body":"The holy grail of your work in alien4cloud, deploying 
your target topology on any targeted infrastructure with the benefits of the TOSCA declarative model. In alien4cloud, deployment is done through the application environment. As an environment is associated with a version, you will deploy the topology specified for that version. However, a topology in the editor may not contain enough information to be deployed; the environment + version association lets you provide the missing pieces. Topology inputs The first elements that one can specify per environment for the same topology are inputs: any properties that are environment specific and that should not be configured by the user who designs the topology, but by the user who deploys it. Location choice As we have seen, the first concepts of alien4cloud were orchestrators and locations defined by the admin. As the person responsible for a deployment, you will have several locations available to you as configured by the admin; this may be all or a subset of alien4cloud’s available locations. For example, for the deployment of a testing environment you may have the internal test OpenStack, Azure and Amazon available, while for production you may have the internal vSphere and a set of physical machines (host pool). Once you have chosen the location you wish to use for your deployment, you may proceed to the next phase: node matching. Node matching In a portable topology, some of the nodes specified will be abstract, meaning there is no implementation associated with them and so no way to actually run them. The most obvious example of an abstract node is the tosca.nodes.Compute node, which represents a machine (either virtual or physical). Alien4Cloud will automatically try to find a best match and associate a location-provided implementation with each of your abstract nodes. 
For example, a tosca.nodes.Compute node on an Amazon location provided by the cloudify3 orchestrator will be matched against an alien.cloudify.aws.nodes.Compute that adds some imageId and flavorId properties (which the admin may have configured for you). Note that there may be two different kinds of matched nodes for an abstract node in your topology: On-demand resources, which are provided by a given location. Services, which are provided by the admin or other applications and accessible on the location you chose. On demand resources On-demand resources are elements that will be created (or reserved, in the special case of the host pool) for you dynamically when you deploy the application. Most of the time they will also be released once the deployment is over; they follow the same lifecycle as the deployment that is consuming them. A tosca.nodes.Compute matched to a VM is typically an on-demand resource: the VM is created when needed, i.e. when deploying your application, and released once done. Services Services are running applications/components that may or may not be managed by alien4cloud and that provide features required by other deployments. This may be a load-balancer service, a DNS service or a data-lake service that is reused by multiple applications. The lifecycle and ownership of a service are of course not the same as those of the applications that consume it. Still, as a consumer you can match your abstract node to a service. And when you deploy an application, you can actually turn it into a service so others may just consume it! Deployment Once all these configurations are done (it is faster than it may seem, as matching is mostly automatic), you can just deploy your application and let alien4cloud do everything next. 
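To make the matching idea concrete, here is a minimal sketch in TOSCA YAML of an abstract node before matching and a hypothetical location-provided replacement after matching. The imageId/flavorId property names follow the AWS example above, but their values and the exact node-type schema are illustrative assumptions, not an exact contract:

```yaml
# Before matching: abstract node in the portable topology,
# no implementation attached, so it cannot run as-is.
node_templates:
  my_server:
    type: tosca.nodes.Compute
```

```yaml
# After matching: hypothetical on-demand resource selected for an
# Amazon location (values shown are indicative; the admin may have
# pre-configured them on the location resource).
node_templates:
  my_server:
    type: alien.cloudify.aws.nodes.Compute
    properties:
      imageId: ami-12345678
      flavorId: t2.medium
```

The topology itself stays portable: only the matched, location-specific node carries the infrastructure details.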
"},{"title":"Deployment Update","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/deployment_update.html","date":null,"categories":[],"body":"Updating a deployment is the operation of modifying a running topology by adding/removing/modifying nodes. It’s a kind of hot update. The deployment update feature of Cloudify is documented here . Obviously, we are not talking about some kind of magic here: when you upgrade a deployment, Cloudify will compare both topologies and will try to discover the way to go from the original to the updated one (add nodes that need to be added, remove thoses that need to be removed …). You can: add nodes add relationships changes properties (note that the node is not reinstalled when a property is updated, so the modification of a property will only impact added relationships that eventually refer to this updated property). rename nodes remove nodes remove relationships You can’t: change the type of a node change a target of a hostedOn relationship add/remove a node group add a scalable compute (as a consequence of the preceding point, since scaling is managed using scaling groups) add/remove a custom workflow Workflow & Lifecyle An important thing to note concerning deployment upgrade is that the task sequence will differ from a standard workflow (when the target topology is deployed from scratch): Cloudify use it’s own sequence that differs from TOSCA. when a node becomes the source of a relationship, the TOSCA workflow can’t be respected. In the following section, we will illustrate fews simple update scenarios and the resulting operation sequence call. Add a node A node nammed Source is hosted on a node named Host . After the update, a node named Target is also hosted on Host . 
Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Target create Host pre_configure_target Target Target pre_configure_source Host Target configure Host post_configure_target Target Target post_configure_source Host Target start Host add_source Target Target add_target Host Add a relationship A node named Source and another named Target are hosted on a node named Host . After the update, a relationship links Source to Target . Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Source add_target Target Target add_source Source Add a component and a relationship (new node is target of the relationship) A node named Source is hosted on a node named Host . After the update, a node named Target is also hosted on Host and a relationship is added between Source and Target . Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Target create Host pre_configure_target Target Target pre_configure_source Host Target configure Host post_configure_target Target Target post_configure_source Host Target start Host add_source Target Target add_target Host Source add_target Target Target add_source Source Add a component and a relationship (new node is source of the relationship) A node named Target is hosted on a node named Host . After the update, a node named Source is also hosted on Host and a relationship is added between Source and Target . 
Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Source create Host pre_configure_target Source Source pre_configure_source Host Source pre_configure_source Target Target pre_configure_target Source Source configure Host post_configure_target Source Source post_configure_source Host Source post_configure_source Target Target post_configure_target Source Source start Host add_source Source Source add_target Host Target add_source Source Source add_target Target Remove a node A node named Source is hosted on a node named Host . After the update, the node named Source is removed. Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Source stop Host remove_source Source Source remove_target Host Source delete Remove a relationship A node named Source is hosted on a node named Host . Another node Target is also hosted on Host and linked to Source by a connectTo relationship. After the update, the relationship between Source and Target is removed. No operations are called; this seems to be a bug! Remove a node that is source of a relationship A node named Source is hosted on a node named Host . Another node Target is also hosted on Host and linked to Source by a connectTo relationship. After the update, the node Source and the relationship are removed. Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Source stop Host remove_source Source Source remove_target Host Source remove_target Target Target remove_source Source Source delete Remove a node that is target of a relationship A node named Source is hosted on a node named Host . Another node Target is also hosted on Host and linked to Source by a connectTo relationship. After the update, the node Target and the relationship are removed. 
Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Source remove_target Target Target remove_source Source Target stop Host remove_source Target Target remove_target Host Target delete Renaming a node Renaming a node is just like removing it and adding another (with the new name). A node named Source is hosted on a node named Host . After the update, the node Source is renamed to Renamed . Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Renamed create Host pre_configure_target Renamed Renamed pre_configure_source Host Renamed configure Host post_configure_target Renamed Renamed post_configure_source Host Renamed start Host add_source Renamed Renamed add_target Host Source stop Host remove_source Source Source remove_target Host Source delete Adding a node, removing another A node named Source is hosted on a node named Host . After the update, the node Source is removed, and a node Target is added. Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Target create Host pre_configure_target Target Target pre_configure_source Host Target configure Host post_configure_target Target Target post_configure_source Host Target start Host add_source Target Target add_target Host Source stop Host remove_source Source Source remove_target Host Source delete "},{"title":"Deployment Update","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/deployment_update.html","date":null,"categories":[],"body":"Updating a deployment is the operation of modifying a running topology by adding/removing/modifying nodes. It’s a kind of hot update. The deployment update feature of Cloudify is documented here . 
Obviously, we are not talking about some kind of magic here: when you upgrade a deployment, Cloudify will compare both topologies and try to discover the way to go from the original to the updated one (add nodes that need to be added, remove those that need to be removed …). You can: add nodes add relationships change properties (note that the node is not reinstalled when a property is updated, so the modification of a property will only impact added relationships that may refer to this updated property). rename nodes remove nodes remove relationships You can’t: change the type of a node change the target of a hostedOn relationship add/remove a node group add a scalable compute (as a consequence of the preceding point, since scaling is managed using scaling groups) add/remove a custom workflow Workflow & Lifecycle An important thing to note concerning deployment upgrade is that the task sequence will differ from a standard workflow (when the target topology is deployed from scratch): Cloudify uses its own sequence that differs from TOSCA. when a node becomes the source of a relationship, the TOSCA workflow can’t be respected. In the following section, we will illustrate a few simple update scenarios and the resulting operation call sequences. Add a node A node named Source is hosted on a node named Host . After the update, a node named Target is also hosted on Host . Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Target create Host pre_configure_target Target Target pre_configure_source Host Target configure Host post_configure_target Target Target post_configure_source Host Target start Host add_source Target Target add_target Host Add a relationship A node named Source and another named Target are hosted on a node named Host . After the update, a relationship links Source to Target . 
Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Source add_target Target Target add_source Source Add a component and a relationship (new node is target of the relationship) A node named Source is hosted on a node named Host . After the update, a node named Target is also hosted on Host and a relationship is added between Source and Target . Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Target create Host pre_configure_target Target Target pre_configure_source Host Target configure Host post_configure_target Target Target post_configure_source Host Target start Host add_source Target Target add_target Host Source add_target Target Target add_source Source Add a component and a relationship (new node is source of the relationship) A node named Target is hosted on a node named Host . After the update, a node named Source is also hosted on Host and a relationship is added between Source and Target . Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Source create Host pre_configure_target Source Source pre_configure_source Host Source pre_configure_source Target Target pre_configure_target Source Source configure Host post_configure_target Source Source post_configure_source Host Source post_configure_source Target Target post_configure_target Source Source start Host add_source Source Source add_target Host Target add_source Source Source add_target Target Remove a node A node named Source is hosted on a node named Host . After the update, the node named Source is removed. Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Source stop Host remove_source Source Source remove_target Host Source delete Remove a relationship A node named Source is hosted on a node named Host . 
Another node Target is also hosted on Host and linked to Source by a connectTo relationship. After the update, the relationship between Source and Target is removed. No operations are called; this seems to be a bug! Remove a node that is source of a relationship A node named Source is hosted on a node named Host . Another node Target is also hosted on Host and linked to Source by a connectTo relationship. After the update, the node Source and the relationship are removed. Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Source stop Host remove_source Source Source remove_target Host Source remove_target Target Target remove_source Source Source delete Remove a node that is target of a relationship A node named Source is hosted on a node named Host . Another node Target is also hosted on Host and linked to Source by a connectTo relationship. After the update, the node Target and the relationship are removed. Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Source remove_target Target Target remove_source Source Target stop Host remove_source Target Target remove_target Host Target delete Renaming a node Renaming a node is just like removing it and adding another (with the new name). A node named Source is hosted on a node named Host . After the update, the node Source is renamed to Renamed . Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Renamed create Host pre_configure_target Renamed Renamed pre_configure_source Host Renamed configure Host post_configure_target Renamed Renamed post_configure_source Host Renamed start Host add_source Renamed Renamed add_target Host Source stop Host remove_source Source Source remove_target Host Source delete Adding a node, removing another A node named Source is hosted on a node named Host . 
After the update, the node Source is removed, and a node Target is added. Here is the sequence of operations that will be triggered during deployment update: Node Operation Third party Target create Host pre_configure_target Target Target pre_configure_source Host Target configure Host post_configure_target Target Target post_configure_source Host Target start Host add_source Target Target add_target Host Source stop Host remove_source Source Source remove_target Host Source delete "},{"title":"TOSCA usage guide","baseurl":"","url":"/documentation/1.4.0/devops_guide/dev_ops_guide.html","date":null,"categories":[],"body":" This section contains references to the TOSCA Simple Profile in YAML specification as it is now supported in Alien. TOSCA is a standard specification that allows DevOps engineers and architects to define reusable components and topologies that can be easily ported across clouds and orchestrators. Alien 4 Cloud is designed so you can easily add your own components and leverage your existing scripts and Puppet or Chef recipes, using the TOSCA YAML-based DSL. TOSCA Alien 4 Cloud is compliant with OASIS’s TOSCA standard to model its different components (nodes, relationships, capabilities and requirements). In order to define components in TOSCA you can use the XML or YAML profile (TOSCA Simple Profile). We recommend using the Simple Profile, and thus this documentation describes only the way to configure elements using it. TOSCA support in Alien 4 Cloud 1.4.0 Alien 4 Cloud only supports the TOSCA Simple Profile in YAML; the XML version is discontinued and no longer supported by the OASIS TOSCA TC, and people still using it should migrate. Alien 4 Cloud 1.4.0 is very close to TOSCA 1.0.0 but still has a few differences. Note that the Simple Profile 1.0 specification will soon be released as a TOSCA standard; however, the standard test suites have not been written yet. 
Known differences This section details the differences between the TOSCA Simple Profile and the Alien 4 Cloud 1.4.0 DSL: imports : alien4cloud imports are based on archive name and version rather than URLs or relative paths. We think that this is a better way to reference artifacts and to increase portability. Most popular tools support this kind of referencing (Maven, Bower, Node, etc.). Note that we plan to support the TOSCA notation in the future but keep extended support for the version notation (and hopefully change the standard to include it). get_artifact : We don’t currently support the get_artifact function in alien4cloud, but rather provide an environment variable, named after the artifact, that provides the local path of the file. valid_source_types is not supported in a capability, as we don’t think that this is a good practice: it limits the ability to create a new node that could connect to multiple services (there is no multiple inheritance, but there are multiple requirements/capabilities). In addition, this doesn’t bring real value except saving two YAML lines when creating a new capability. attach_to : the relationship direction has changed in TOSCA from the working draft to the latest release and we still support the previous version: basically, BlockStorage is the source and Compute the target, which actually makes more sense to us. We plan to support parsing/export to TOSCA by reversing the relation at parsing time. range type: alien4cloud currently doesn’t support the range primitive type. attributes on capabilities are not yet parsed in alien4cloud. data types on attributes are not yet supported. network : we don’t yet support the TOSCA network types but have simplified support for network definitions. group_types : we don’t support group types, as they can define operations but the impact of those operations on the workflow is not defined. Note that there is actually no group interface in TOSCA. We have group support on node templates, but it is currently used to assign policies (like H.A.). 
policy_types : while some policies are supported in alien4cloud, they are supported through groups and are not flexible. Note that before TOSCA Simple Profile 1.1, policies were experimental, as the definition syntax has changed quite a bit. interface_types : the TOSCA Simple Profile working draft had no interface_types, and interfaces were defined directly on the node types or relationship types. Alien 4 Cloud is compliant with the working drafts, which provide a simpler notation. implementation artifact type: Alien 4 Cloud currently relies on file extensions to automatically find the type of an implementation artifact, which is very efficient in the simple definition notation. TOSCA, however, allows users to specify the artifact type explicitly. metadata : metadata was added in late versions of the specification. Alien4Cloud currently supports tags, which are similar to metadata but also apply to node types. We are planning to work with the TOSCA TC to allow metadata on types and templates. Declarative workflow differences Declarative workflow generation is the main difference between the TOSCA Simple Profile and Alien 4 Cloud. DSL extensions tags : as stated, we don’t support metadata but provide support for the tags element, which is similar but applies to all type elements. workflow : Alien 4 Cloud supports the definition of imperative workflows. TOSCA Simple Profile 1.0 doesn’t provide support for imperative workflows, but we have pushed this into the 1.1 specification. Note, however, that we don’t support the 1.1 workflow specification, which has just been defined and allows some more advanced options than the version we currently support. 
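As an illustration of the imports difference described in the list above, here is a sketch comparing the two notations. The file and archive names are made up for the example; only the shape of each notation is the point:

```yaml
# TOSCA Simple Profile: import by relative path (or URL)
imports:
  - definitions/my-custom-types.yaml

# alien4cloud DSL: import by archive name and version,
# resolved against the archive catalog rather than the file system
imports:
  - my-custom-types:1.0.0
```

Referencing by name and version is what lets an archive be shared through the catalog without caring where its files physically live.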
"},{"title":"ALIEN 4 Cloud","baseurl":"","url":"/common/download.html","date":null,"categories":[],"body":" Community edition Alien 4 Cloud 1.4.3.2 Alien 4 Cloud latest sprint milestone Alien 4 Cloud latest build Old stable versions 1.4.0 Premium edition Alien 4 Cloud Premium 1.4.3.2 Alien 4 Cloud Premium latest sprint milestone Alien 4 Cloud Premium latest build Old stable versions 1.4.0 Need and account for Premium Download ? Contact us Get Started "},{"title":"Entry schema","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/entry_schema.html","date":null,"categories":[],"body":"Entry schema is used in the context of a property definition to specify the type of the entry when the property definition type is list or map. Keynames Keyname Required Type Description tosca_definitions_version type yes string The required data type for the entries of the map or list. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 description no string The optional description for the entry type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 constraints no list of constraints The optional list of sequenced constraints to add to the data type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 Constraints behavior Adding constraints to an entry schema actually allows to augment a data type without having to create a new derived type. Constraints can only be applied when the type of the entry schema is a primitive type. Grammar entry_schema : description : <schema_description> type : <entries_type> constraints : - <entries_constraint_1> - ... - <entries_constraint_n> "},{"title":"ALIEN 4 Cloud","baseurl":"","url":"/common/features.html","date":null,"categories":[],"body":" Alien4Cloud is OpenSource and you can download from the download link in the navbar! 
You can also contact us to benefit from the Premium version with great features and support for your enterprise: Open source Premium SELF-SERVICE Role based portal & Self-service deployment Release pipeline / Versions, environments management Customization: Open Source implementation (Algorithms/UI) Integration with Dev/Ops systems (REST API) Metrics & Analytics: Infrastructure Resources Dashboards, Audit Trails DESIGN APPLICATIONS (BUILD) Application Portability & Reusability using Tosca Standard Tosca support: Simple profile in YAML v1.0 / alien 4 cloud 1.3 DSL Components and blueprints Catalog (Drag & Drop, Git Archive import) Users and applications workspaces Full featured blueprint designer: Drag & drop, Custom Workflow, Advanced Git Integration Artifact repositories management (Http, Maven, Git) basic (http) Infrastructure targets and resources management Portability insights at the topology & component level DEPLOY & MANAGE (RUN) IaaS resources matching Reuse your existing artifacts: Shell, Chef, Puppet, Ansible... 
Interface with any deployment platform, orchestrators, custom basic - Display deployment information from underlying orchestrator - Cloudify v3.3.1 certification Deployment on any IaaS: Virtual, Physical, private, public or Hybrid basic Certifications: - Amazon (AWS) - OpenStack - Microsoft Azure - VMWare VSphere Containers/Docker support (Mesos & Kubernetes) Runtime view (while deploying or running) basic Runtime Policies: High availability and Scalability UPGRADES Patch & Post deployment execution Alien migration tools SECURITY & NFR Authentication, LDAP Integration Manage SSH Keys Associate existing Security Groups to Compute SAML integration Alien4Cloud High Availability SUPPORT Online Documentation, product updates Response SLA time, Support Portal & knowledge Base, Direct access to core developers, Hot patches, fixes Updates and new premium features (Alien & Cloudify), Indemnification, Priority on new features development "},{"title":"Function definition","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/function_definition.html","date":null,"categories":[],"body":"Work in progress, details the functions that can be applied to properties and function input parameters. A function definition defines a named function to evaluate at runtime, and that can be used as a property, attribute or input parameter value. It is used to dynamically retrieve a value from a property definition defined on an entity. Reserved function keywords The following keywords may be used in some TOSCA functions in place of a TOSCA Node or Relationship Template name. They will be interpreted when evaluating the function at runtime. Keyword Valid context Description SELF Node Template or Relationship Template Node or Relationship Template instance that contains the function at the time the function is evaluated. SOURCE Relationship Template only Node Template instance that is at the source end of the relationship that contains the referencing function. 
TARGET Relationship Template only Node Template instance that is at the target end of the relationship that contains the referencing function. HOST Node Template only Node that “hosts” the node using this reference (i.e., as identified by its HostedOn relationship). Supported functions in Alien4Cloud are: get_property get_attribute get_operation_output concat "},{"title":"get_attribute","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/get_attribute_definition.html","date":null,"categories":[],"body":"The get_attribute function is used to retrieve the values of named attributes declared by the referenced node or relationship template name. Use this function for input parameters. Keyname Type Required Description modelable_entity_name string yes The required name of a modelable entity (e.g., Node Template or Relationship Template name) as declared in the service template that contains the named property definition the function will return the value from. Can be one of the reserved keywords: SELF, SOURCE, TARGET, HOST attribute_name string yes Name of the attribute definition the function will return the value from. Grammar get_attribute : [ <modelable_entity_name | SELF | SOURCE | TARGET | HOST> , <attribute_name> ] Example The following example shows how to define an input parameter on a relationship using the get_attribute function: relationship_types : fastconnect.relationship.FunctionSample : interfaces : configure : add_target : inputs : TARGET_IP : { get_attribute : [ TARGET , ip_address ] } implementation : add_target.sh "},{"title":"get_input","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/get_input.html","date":null,"categories":[],"body":"The get_input function is used to retrieve the values of properties declared within the inputs section of a TOSCA Service Template. 
Grammar get_input : <input_property_name> Example inputs : cpus : type : integer node_templates : my_server : type : tosca.nodes.Compute properties : num_cpus : { get_input : cpus } "},{"title":"get_operation_output","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/get_operation_output_definition.html","date":null,"categories":[],"body":"The get_operation_output function is used to retrieve the values of variables exposed / exported from an interface operation. Use this function for input parameters and/or attributes. Keynames Keyname Type Required Description modelable_entity_name string yes The required name of a modelable entity (e.g., Node Template or Relationship Template name) as declared in the service template that implements the named interface and operation. Can be one of the reserved keywords: SELF, SOURCE, TARGET, HOST interface_name string yes The required name of the interface which defines the operation. operation_name string yes The required name of the operation whose output value we would like to retrieve. output_variable_name string yes The required name of the output variable that is exposed / exported by the operation. 
Grammar get_operation_output : [ <modelable_entity_name | SELF | SOURCE | TARGET | HOST> , <interface_name> , <operation_name> , <output_variable_name> ] Example The following example shows how to define an attribute and an input parameter using the get_operation_output function: node_types : fastconnect.nodes.FunctionSample : attributes : port : { get_operation_output : [ SELF , Standard , configure , bound_port ]} interfaces : Standard : configure : config.sh #the config.sh script should expose an environment variable (output) named \"bound_port\" start : inputs : PORT : { get_operation_output : [ SELF , Standard , configure , bound_port ]} implementation : start.sh "},{"title":"get_property","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/get_property_definition.html","date":null,"categories":[],"body":"The get_property function is used to retrieve property values between modelable entities defined in the same service template. Use this function for input parameters. Keynames Keyname Type Required Description modelable_entity_name string yes The required name of a modelable entity (e.g., Node Template or Relationship Template name) as declared in the service template that contains the named property definition the function will return the value from. Can be one of the reserved keywords: SELF, SOURCE, TARGET, HOST capability_name string no The optional name of a capability within the modelable entity that contains the named property definition the function will return the value from. property_path string yes Name (of) or path (to) the property definition the function will return the value from. 
can be a nested name such as: property_name.nested_property Grammar get_property : [ <modelable_entity_name | SELF | SOURCE | TARGET | HOST> , [ <capability_name> ], <property_path> ] Example Given a custom data type definition: alien.nodes.test.ComplexDataType : properties : nested : type : string nested_array : type : list entry_schema : type : string nested_map : type : map entry_schema : type : string A node type definition: alien.nodes.test.FunctionTest : derived_from : tosca.nodes.SoftwareComponent properties : myName : type : string complex_prop : type : alien.nodes.test.ComplexDataType interfaces : Standard : create : inputs : MY_NAME : { get_property : [ SELF , myName ] } COMPLEX : { get_property : [ SELF , \"complex_prop\" ] } NESTED : { get_property : [ SELF , \"complex_prop.nested\" ] } NESTED_ARRAY_ELEMENT : { get_property : [ SELF , \"complex_prop.nested_array[0]\" ] } NESTED_MAP_ELEMENT : { get_property : [ SELF , \"complex_prop.nested_map.tutu\" ] } CAPA_PORT : { get_property : [ SELF , endpoint , port ] } implementation : scripts/create.sh And the following topology snippet: FunctionTest : type : alien.nodes.test.FunctionTest properties : myName : functionTest_Name complex_prop : nested : toto nested_array : [ titi , tuctuc ] nested_map : toctoc : tactac tutu : tata capabilities : endpoint : properties : port : 80 The following environment variables will be available to the script: # simple property echo \"MY_NAME is ${MY_NAME}\" # capability property echo \"CAPA_PORT is ${CAPA_PORT}\" #complex property. 
Will result in the JSON serialization of the property value echo \"COMPLEX is ${COMPLEX}\" # nested properties echo \"NESTED is ${NESTED}\" # first element of the array nested property \"nested_array\" echo \"NESTED_ARRAY_ELEMENT is ${NESTED_ARRAY_ELEMENT}\" # nested property of the map nested property \"nested_map\" echo \"NESTED_MAP_ELEMENT is ${NESTED_MAP_ELEMENT}\" Output: MY_NAME is functionTest_Name CAPA_PORT is 80 COMPLEX is { \"nested\" : \"toto\" , \"nested_array\" : [ \"titi\" , \"tuctuc\" ] , \"nested_map\" : { \"toctoc\" : \"tactac\" , \"tutu\" : \"tata\" }} NESTED is toto NESTED_ARRAY_ELEMENT is titi NESTED_MAP_ELEMENT is tata "},{"title":"Getting started with a TOSCA component","baseurl":"","url":"/documentation/1.4.0/devops_guide/custom_types/getting_started_with_tosca_component.html","date":null,"categories":[],"body":" In this section, we will detail our JDK component that can be found in our github: JDK Cloud Service Archive (CSAR) A Cloud Service Archive (CSAR) is a folder or a zip file that contains type and template definitions and any other files required for element implementations. The structure of our JDK’s CSAR is the following: ├── images │ ├── jdk.png ├── scripts │ └── install_jdk.sh ├── jdk-type.yml jdk-type.yml is the TOSCA file which contains all the TOSCA definitions The folders /images and /scripts contain files which are referenced by the jdk-type.yml file. A TOSCA file can be written in XML or YAML. Here we chose to use YAML because this is the format recognized by Alien4Cloud. 
More details about CSAR here The TOSCA file structure The structure is the following: tosca_definitions_version : # Required TOSCA Definitions version string description : # Optional short description of the definitions inside the file template_name : # Optional name of this service template template_version : # Optional version of this service template template_author : # Optional author of this service template imports : # list of import statements for importing other definition files node_types : # list of node type definitions capability_types : # list of capability type definitions relationship_types : # list of relationship type definitions More details on the TOSCA file definition here The basic part tosca_definitions_version : tosca_simple_yaml_1_0_0_wd03 description : TOSCA simple profile with JDK. template_name : jdk-type template_version : 1.0.0-SNAPSHOT template_author : FastConnect imports : - tosca-normative-types:1.0.0-SNAPSHOT A little explanation: tosca_simple_yaml_1_0_0_wd03 is the TOSCA version which Alien will use to parse the file. tosca-normative-types:1.0.0-SNAPSHOT means that our JDK component has a dependency on the TOSCA normative types, which are defined by another CSAR with the following value in its TOSCA file. Alien4Cloud comes with a default version of the normative types inside its catalog. Make sure that it matches the version you specify in your TOSCA file. Otherwise, you can import the needed version from our github Main part: node_types The JDK component has 2 node types: alien.nodes.JDK and alien.nodes.JavaSoftware . alien.nodes.JDK is the node which is responsible for installing the JDK alien.nodes.JavaSoftware is an abstract node to be extended by software components which require a JDK. 
The node_type structure Here is a description of the node_type structure used by the jdk-type.yml file: <node_type_name> : # Define the name of the node type abstract : # Optional boolean to specify it’s an abstract node derived_from : # Optional parent node type name the node derives from description : # Optional description tags : # Optional key/value map to assign your own metadata to the node # A default “icon” key is recognized by Alien4Cloud to associate an image to the node properties : # Optional list of property definitions attributes : # Optional list of attribute definitions requirements : # Optional sequenced list of requirement definitions capabilities : # Optional list of capability definitions interfaces : # Optional list of named interfaces More details about node type definition here The alien.nodes.JDK node_type in details node_types : alien.nodes.JDK : derived_from : tosca.nodes.SoftwareComponent description : > Installation of JDK tags : icon : images/jdk.png properties : java_url : type : string required : true default : \"http://download.oracle.com/otn-pub/java/jdk/7u75-b13/jdk-7u75-linux-x64.tar.gz\" java_home : type : string required : true default : \"/opt/java\" attributes : java_version : { get_operation_output : [ SELF , Standard , create , JAVA_VERSION ] } java_message : { concat : [ \"Java help: \" , get_operation_output : [ SELF , Standard , create , JAVA_HELP ]] } capabilities : jdk : type : alien.capabilities.JDK occurrences : [ 0 , unbounded ] interfaces : Standard : create : inputs : JAVA_URL : { get_property : [ SELF , java_url ] } JAVA_HOME : { get_property : [ SELF , java_home ] } implementation : scripts/install_jdk.sh derived_from: tosca.nodes.SoftwareComponent The alien.nodes.JDK node type is derived from the TOSCA native node tosca.nodes.SoftwareComponent , the root type defined by TOSCA to define software components. icon: images/jdk.png The node will use the image which can be found at images/jdk.png inside the CSAR archive. 
properties: The node has 2 properties which will be used by the installation script. attributes: The node defines 2 attributes to be shown at runtime. capabilities: The node exposes a jdk capability to provide relationships of type alien.capabilities.JDK (defined later in the file). It basically says that any node that requires an alien.capabilities.JDK type can be linked to this node. interfaces: Defines operations on the node. By default, every TOSCA node has an implicit default lifecycle composed of several operations which are: create , configure , start , stop and delete . Here we only define the create operation which calls the install_jdk.sh script inside the CSAR archive. The install_jdk.sh The script installs a JDK on a Linux machine given a tarball archive and a target folder on the machine. What is important to focus on is the inputs definition. inputs: JAVA_URL: { get_property: [ SELF, java_url ] } JAVA_HOME: { get_property: [ SELF, java_home ] } So 2 inputs are specified in the YAML file. The values are retrieved from the node’s properties using the function get_property . The names JAVA_URL and JAVA_HOME , defined as key names of the inputs properties, are passed as environment variables before calling the install_jdk.sh script. Therefore, inputs inside the script are simply accessed as normal variables. echo \"${currHostName}:${currFilename} Java url ${JAVA_URL}\" echo \"${currHostName}:${currFilename} Java home ${JAVA_HOME}\" Another point to highlight is that it is a best practice to correctly handle errors and especially the exit code of your scripts, so that the orchestrator leveraged by Alien4Cloud can correctly handle errors and notify that a node has failed. Here we used a function to add an error message, but it is not mandatory. 
error_exit () { echo \"${currHostName}:${currFilename} $2 : error code: $1\" exit $1 } The alien.nodes.JavaSoftware node_type in details alien.nodes.JavaSoftware : abstract : true derived_from : tosca.nodes.Root description : The Alien JavaSoftware node represents a generic software component that can be launched by Java. tags : icon : images/jdk.png requirements : - java : alien.capabilities.JDK relationship : alien.relationships.JavaSoftwareHostedOnJDK abstract: true This node is an abstract one, meaning that other nodes can extend it in order to inherit all of its definition. requirements: This node must be linked to a node which offers the alien.capabilities.JDK capability type with a relationship of type alien.relationships.JavaSoftwareHostedOnJDK (defined later in the file). Defining capability_types Here is the structure of the capability_type used in the jdk-type.yml file: capability_types : <capability_type_name> : # The name of the capability type derived_from : # Optional parent Capability Type name the Capability Type derives from The jdk-type.yml file defines an alien.capabilities.JDK type and it is derived from the tosca.capabilities.Container TOSCA normative type. capability_types : alien.capabilities.JDK : derived_from : tosca.capabilities.Container More details about the capability definition here Defining relationship_types Here is the structure of the relationship_type used in the jdk-type.yml file: <relationship_type_name> : # The name of the relationship type derived_from : # Optional parent Relationship Type name the Relationship Type derives from description : # Optional description valid_sources : # Optional list of one or more valid source entities or entity types (i.e., Node Types or Capability Types). valid_target_types : # Required list of one or more valid target entities or entity types (i.e., Node Types or Capability Types). 
In the source file: alien.relationships.JavaSoftwareHostedOnJDK : derived_from : tosca.relationships.HostedOn description : Relationship used to describe that the SoftwareComponent is hosted on the JDK. valid_sources : [ tosca.nodes.JavaSoftware ] valid_target_types : [ alien.capabilities.JDK ] alien.relationships.JavaSoftwareHostedOnJDK is the name of our relationship type. derived_from: tosca.relationships.HostedOn The relationship derives from the tosca.relationships.HostedOn TOSCA normative type. valid_sources: [ tosca.nodes.JavaSoftware ] The source of the relationship must be a node with a tosca.nodes.JavaSoftware type in its requirements list. valid_target_types: [ alien.capabilities.JDK ] The target of the relationship must be a node with an alien.capabilities.JDK type in its capabilities list. More details about the relationship type here Next steps Going deeper writing TOSCA components LAMP Stack Tutorial Create your own components Going deeper with TOSCA Upload your CSAR into Alien4Cloud "},{"title":"Launch Cloudify on AWS","baseurl":"","url":"/documentation/1.4.0/getting_started/going_further_cfy_boot.html","date":null,"categories":[],"body":" THIS SECTION IS BEING WRITTEN AND IS NOT YET COMPLETED. In this first going further tutorial we will explain how you can bootstrap an open-source TOSCA orchestrator that is a bit heavier than Puccini and that we currently recommend for production environments. This orchestrator is cloudify. Some features that are not yet supported by puccini are supported in Cloudify so this will probably be your choice for more advanced scenarios with alien4cloud. But first of all let’s look at how to bootstrap it. Bootstrapping cloudify To bootstrap cloudify on Amazon we will actually leverage puccini as it provides support for every feature required to actually launch a TOSCA recipe that describes a cloudify orchestrator. 
Security groups configuration As a pre-requisite we will configure Security Groups for our cloudify manager so it can communicate with its other components. cfy_manager_agents : Security group to open the ports on the manager machine so agents can communicate with it. Type Protocol Port Range Source Custom TCP Rule TCP 53229 cfy_agents Custom TCP Rule TCP 53333 cfy_agents Custom TCP Rule TCP 5671 cfy_agents Custom TCP Rule TCP 8101 cfy_agents cfy_agents : Security group to use on every agent machine so the manager can communicate with the agent machine. Type Protocol Port Range Source SSH TCP 22 cfy_manager_agents WinRM-HTTPS TCP 5986 cfy_manager_agents cfy_manager_ssl_client : Security group so that clients can access the cloudify manager over SSL (we will configure a secured manager) Type Protocol Port Range Source SSH TCP 22 0.0.0.0/0 HTTPS TCP 443 0.0.0.0/0 cfy_manager_cluster : Security group to cluster the cloudify manager Type Protocol Port Range Source Custom TCP Rule TCP 8300 0.0.0.0/0 Custom TCP Rule TCP 15432 0.0.0.0/0 Custom TCP Rule TCP 22000 0.0.0.0/0 Custom UDP Rule UDP 8301 0.0.0.0/0 Custom TCP Rule TCP 8301 0.0.0.0/0 Custom TCP Rule TCP 8500 0.0.0.0/0 Configure puccini to deploy on aws In the basic getting started we auto-configured a local docker location for you so you can match compute nodes on local docker containers. We want to deploy the cloudify manager on Amazon; in order to do so we will go to the admin section and configure a puccini location. But first there is a configuration file to manually edit in order to configure aws support in puccini. Go to the alien4cloud-getstarted/puccini-cli-1.4.0-SNAPSHOT/conf/providers/aws/default folder and copy the provider.conf.tpl file to provider.conf then edit the file to fill in your access_key_id, access_key_secret and region. In the alien4cloud UI, go to , then . You should see the list of orchestrators with a single orchestrator, Puccini simple orchestrator; select it. Go to and then click on . 
Click on to open the on demand resources configuration tab and drag-and-drop an org.alien4cloud.puccini.aws.nodes.Instance to the left section. The cloudify types require a Red Hat or CentOS host; in order to allow puccini to deploy on these OSes you need to configure a cloud init to remove the requiretty option. This option was added to the Red Hat operating system for security reasons, but has been recognized as inefficient and is removed from the latest versions. It prevents non-interactive SSH connections such as the ones the puccini orchestrator performs. Edit the user_data field that will set up the VM cloud init using the following code: #cloud-config runcmd: - echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-puccini-cloud-init-requiretty Import cloudify TOSCA types You have learned in the getting started how to import archives from git so import the cloudify types archive from: * https://github.com/alien4cloud/samples/tree/master/cloudify-types Configure your topology Drag and drop a Compute node, dra Configure your matching information "},{"title":"Alien High Availability","baseurl":"","url":"/documentation/1.4.0/admin_guide/ha.html","date":null,"categories":[],"body":" Premium feature This section refers to a premium feature. When deploying A4C in a production environment, you may want to be sure it will be available 24/7, even in case of crashes. We provide a plugin that manages high availability for A4C using a primary/backup mode. Note that this page focuses only on A4C HA: we don’t consider HA for orchestrator components (managers …) in this page. Architecture Our HA solution is based on a primary/backup mechanism: you’ll need to deploy several instances of A4C to ensure one will be available at a given time. The alien4cloud-premium-ha plugin leverages Consul features: - Key/Value Storage: a distributed key/value store is used to determine which A4C instance is the leader. - Failure Detection: Consul is in charge of checking the liveness of A4C instances. 
As a consequence, to use this plugin you will need a Consul server (but you’ll probably use a consul cluster!). Since A4C uses ElasticSearch as a backend server, you’ll need to set up a remote server (instead of launching an embedded one), and you’ll probably prefer to set up an ElasticSearch cluster with replicated nodes. Since A4C uses the local file system to store data (plugins, CSARs, images), you’ll need a distributed (and possibly replicated, or at least backed up) file system. Finally, you’ll probably want to set up a reverse proxy in front of your A4C instances to have a single entry point for the application. So the whole architecture could look like this: How A4C works in HA mode In a very first stage, A4C will start in backup mode, which is a limited mode, with only a lightweight bootstrap context started: - Not all REST endpoints will be available. Basically, only the health check endpoint will be available. - All plugins are disabled (for instance, orchestrators are not enabled, so no background threads will run). At this early stage, A4C is not usable. A4C will then open a session onto consul and try to acquire a lock on a consul key. If the lock is already acquired (by another instance), it will stay in this bootstrap mode and will wait for changes on this key. A health check is associated with the session, so consul will check for the liveness of this A4C instance. When the lock is acquired, this means the current instance is elected as the leader. The whole application context is started, all REST endpoints are available and everything is woken up. This A4C instance is then fully usable. If the JVM or the machine crashes (or even if an A4C instance can’t reach ElasticSearch), the health check will fail, consul will disable the session, and the lock (if it is associated with this session) will be released. The primary will fall back to backup mode. Another instance will be elected. 
Installation Sample topology As part of our plugin, you’ll find TOSCA types and a topology that can help you set up this kind of architecture. You can use it as an example but keep in mind it is not intended to be production ready. A few notes concerning this topology: The AlienCompute which hosts A4C is scalable and you should have at least 2 instances. Can be scaled at runtime. The BackendCompute which hosts ElasticSearch is scalable and you should have at least 2 instances. Can’t be scaled at runtime. The ConsulCompute is scalable and a good number is 3 for the minimum instances count (for quorum requirements). Can be scaled at runtime. We use a local Consul agent on each A4C host. This agent is integrated in the Consul cluster (and so knows all the members), so A4C just needs to talk to this agent (and doesn’t have to manage failover in case a member of the server cluster crashes). We use a Samba server to manage a distributed file system. This is just an example, you can use whatever you want (NFS, sshfs …) as long as you can mount it as if it were a local file system. We use NGINX as a reverse proxy in front of this primary/backup architecture. We use Consul Template to drive the NGINX reverse proxy. When something changes concerning the distributed lock in consul, the config of NGINX is changed and NGINX is restarted. Only NGINX needs to be exposed with a public IP. Security considerations: The gossip protocol (communication inside the consul cluster) can be encrypted using a SHA key (provided in the topology). Communication between consul clients (A4C and ConsulTemplate) and Consul agents can be secured using SSL. NGINX can expose an HTTPS endpoint but redirect to an HTTP alien without SSL (if you trust your private network). Known limitations: The samba server is not secured. The ElasticSearch cluster cannot be secured in this topology. But of course, you can use your own secured ES cluster. 
We use a single CA certificate (provided in the topology) to generate all the keys used for SSL communications (HTTPS for NGINX, HTTPS for A4C, TLS for Consul). Manual configuration If you wish to use custom scripts in order to perform the installation, you can find here how we configure the consul server and the consul template element to perform re-configuration of the nginx component. Consul server configuration The configuration is performed in the configure.sh script and is detailed here. In order to configure the consul server we decided to split the configuration in multiple files all placed in /etc/consul . Consul will look for json files in this directory and process them in order. The first file 01_basic_config.json contains the generic configuration of our consul agent and is used both for clients and server agents. It has the following content: { \"datacenter\" : \"a4c\" , \"data_dir\" : \"%CONSUL_DATA_DIR%\" , \"log_level\" : \"trace\" , \"node_name\" : \"%INSTANCE%\" , \"client_addr\" : \"0.0.0.0\" , \"bind_addr\" : \"%CONSUL_BIND_ADDRESS%\" , \"ports\" : { \"http\" : %CONSUL_API_PORT% } } Where * %CONSUL_DATA_DIR% should be replaced with the path to the directory you want consul to store data in. * %INSTANCE% should be replaced with a unique name for the node * %CONSUL_BIND_ADDRESS% should be replaced with the IP of the NIC used for communication with the other consul agents (server and clients) * %CONSUL_API_PORT% should be replaced with the port of the consul API (default should be 8500) 02_server_config.json contains the specific configuration for consul server nodes and has the following content: { \"server\" : true , \"bootstrap_expect\" : %INSTANCES_COUNT% } Where %INSTANCES_COUNT% should be the expected number of nodes in your consul server cluster (3 is a good number). 
In case you want to specify a key to encrypt gossip exchanges within the consul cluster, you can add the 03_encrypt_config.json file with the following content: { \"encrypt\" : \"%ENCRYPT_KEY%\" } Where %ENCRYPT_KEY% should be replaced with the desired string. Finally, to add SSL configuration to the API we add a 04_server_secured_config.json file with the following content: { \"ports\" : { \"dns\" : -1 , \"http\" : -1 , \"https\" : -1 }, \"key_file\" : \"/etc/consul/ssl/server-key.pem\" , \"cert_file\" : \"/etc/consul/ssl/server-cert.pem\" , \"ca_file\" : \"/etc/consul/ssl/ca.pem\" , \"verify_incoming\" : true , \"verify_outgoing\" : true } To start the server nodes you can use the following script: #!/bin/bash -e nohup sudo bash -c 'consul agent -ui -config-dir /etc/consul > /var/log/consul/consul.log 2>&1 &' >> /dev/null 2>&1 & echo \"Joining cluster by contacting following member ${CONSUL_SERVER_ADDRESS}\" sudo consul join ${CONSUL_SERVER_ADDRESS} With a CONSUL_SERVER_ADDRESS environment variable that contains the comma-separated list of IP addresses of the members of the consul cluster. Consul client configuration The client nodes of consul are required to be installed on the machines of the alien4cloud server as well as the machine that hosts the nginx load balancer/reverse proxy. Both of the consul clients have the same configuration as a server node with the critical difference that they do not contain the 02_server_config.json configuration file, which is specific to server nodes. 
In addition, in our installation we rename the 04_server_secured_config.json file to 04_client_secured_config.json for better naming consistency; it should have the following content: { \"client_addr\" : \"127.0.0.1\" , \"ports\" : { \"dns\" : -1 , \"http\" : -1 , \"https\" : %CONSUL_API_PORT% }, \"key_file\" : \"/etc/consul/ssl/client-key.pem\" , \"cert_file\" : \"/etc/consul/ssl/client-cert.pem\" , \"ca_file\" : \"/etc/consul/ssl/ca.pem\" , \"verify_outgoing\" : true } Where %CONSUL_API_PORT% should be replaced with the same port specified in the 01_basic_config.json server files. To start the client nodes you can use the following script: #!/bin/bash -e nohup sudo bash -c 'consul agent -config-dir /etc/consul > /var/log/consul/consul.log 2>&1 &' >> /dev/null 2>&1 & echo \"Joining cluster by contacting following member ${CONSUL_SERVER_ADDRESS}\" sudo consul join ${CONSUL_SERVER_ADDRESS} With a CONSUL_SERVER_ADDRESS environment variable that contains the comma-separated list of IP addresses of the members of the consul server cluster. Consul template configuration for automatic nginx update. Consul template will be responsible for updating the nginx configuration when the alien4cloud master node changes. In order to configure it we will create a /etc/consul_template/consul_template.conf file with the following content: consul = \"127.0.0.1:%CONSUL_API_PORT%\" template { source = \"/etc/consul_template/nginx.conf.ctpl\" destination = \"/etc/nginx/sites-enabled/default\" command = \"sudo /etc/init.d/nginx reload\" } ssl { enabled = %TLS_ENABLED% verify = true cert = \"/etc/consul_template/ssl/client-cert.pem\" key = \"/etc/consul_template/ssl/client-key.pem\" ca_cert = \"/etc/consul_template/ssl/ca.pem\" } Where %CONSUL_API_PORT% should be replaced with the actual port of the consul client agent running on the nginx host. If you have enabled TLS please make sure that %TLS_ENABLED% is replaced by true (recommended setting). 
Finally, add a template file /etc/consul_template/nginx.conf.ctpl so consul template can update the nginx configuration, with the following content (this configuration is for a secured alien4cloud server): {{ if key \"service/a4c/leader\" }} server { listen %LISTEN_PORT% default ssl ; server_name %SERVER_NAME% ; ssl_session_cache shared:SSL:1m ; ssl_session_timeout 10m ; ssl_certificate /etc/nginx/ssl/server-cert.pem ; ssl_certificate_key /etc/nginx/ssl/server-key.pem ; ssl_verify_client off ; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2 ; ssl_ciphers RC4:HIGH:!aNULL:!MD5 ; ssl_prefer_server_ciphers on ; location / { proxy_pass https://{{ key \"service/a4c/leader\" }}/ ; proxy_ssl_session_reuse on ; proxy_set_header Host %SERVER_NAME% ; proxy_pass_request_headers on ; # Force https proxy_redirect http:// https:// ; } } {{ else }} server { listen %LISTEN_PORT% default_server ; listen [::]:%LISTEN_PORT% default_server ipv6only=on ; ssl on ; ssl_certificate /etc/nginx/ssl/server-cert.pem ; ssl_certificate_key /etc/nginx/ssl/server-key.pem ; ssl_session_cache shared:SSL:10m ; root /usr/share/nginx/html ; index index.html index.htm ; server_name %SERVER_NAME% ; location / { try_files $uri $uri/ =404 ; } } {{ end }} Where %LISTEN_PORT% should be replaced with the port on which you want to expose the nginx server and %SERVER_NAME% should be replaced with the public IP address of the nginx host. Finally, you can start the consul template process using the following script (note that nginx must be installed before consul template): nohup sudo bash -c '/var/lib/consul_template/consul-template -config /etc/consul_template/consul_template.conf >> /var/log/consul_template/consul_template.log 2>&1' >> /dev/null 2>&1 & Alien Configuration We detail here the different configuration items you can change in the A4C configuration related to the usage of the HA plugin. 
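For debugging the template above, the service/a4c/leader key it watches can be read directly from the consul HTTP KV API (GET /v1/kv/service/a4c/leader); the API returns values base64-encoded. The decoding sketch below is only an illustration: the response shape follows the consul KV API, but the leader address is made up:

```python
import base64

# Illustrative debugging helper: decode the current A4C leader address
# from a parsed GET /v1/kv/service/a4c/leader response. The consul KV
# API returns each value base64-encoded in the 'Value' field.
def decode_leader(kv_response):
    return base64.b64decode(kv_response[0]['Value']).decode()

# Example response with a made-up leader address.
sample = [{'Key': 'service/a4c/leader',
           'Value': base64.b64encode(b'10.0.0.5:8088').decode()}]
leader = decode_leader(sample)
```

If the key is absent (no elected leader), the template above falls back to the placeholder server block, which is a quick way to tell an election problem from an nginx problem.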
Property Name Default value Details ha.ha_enabled false If true, enable HA and use the following properties. If false, the following properties are ignored. ha.consulAgentIp localhost The IP address of the consul agent (server or client) to connect to. ha.consulAgentPort 8500 The port of the consul agent. ha.instanceIp The IP address of the alien instance (used for health checks, so this address should be visible from the consul agent; it can be localhost if the agent is on the same machine). ha.healthCheckPeriodInSecond 5 The delay in seconds between each health check query done by the consul agent. ha.consulSessionTTLInSecond 60 The duration in seconds of the consul session. This session is renewed before this delay expires. ha.consulLockDelayInSecond 0 The delay between the session invalidation and the lock release. 0 is a good value since we want a new leader to be elected if the primary crashes. ha.lockAcquisitionDelayInSecond 20 The delay in seconds before trying to acquire a lock again after a failure (when consul is not reachable, for example). ha.consul_tls_enabled false When true, use https to talk to consul (a keystore and a truststore then need to be configured). ha.keyStorePath The key store for the SSL connection to consul. ha.keyStorePwd The password for the keystore. ha.trustStorePath The truststore for the SSL connection to consul. ha.trustStorePwd The password for the truststore. ha.serverProtocol http The protocol with which the alien instance can be contacted (used to build the health check URL). Just set it to ‘https’ if alien SSL is on. ha.health_disk_space_threshold 10 * 1024 * 1024 (10 MB) The health check endpoint checks the remaining disk space on the host of the A4C instance. Under this threshold, the health check fails. ha.consulQueryTimeoutInMin 3 The HA plugin uses this timeout when querying consul with blocking reads. ha.consulConnectTimeoutInMillis 1000 * 30 (30 seconds) TCP connection timeout when querying consul. 
ha.consulReadTimeoutInMillis 1000 * 60 * 5 (5 minutes) TCP read timeout when querying consul. "},{"title":"ALIEN 4 Cloud","baseurl":"","url":"/community/index.html","date":null,"categories":[],"body":" Twitter We are on Twitter , so feel free to follow us and get the latest news on Alien 4 Cloud. Source code ALIEN 4 Cloud is open source under the Apache 2 License. The code is hosted on GitHub . You are welcome to clone, fork and submit pull-requests. The documentation is also hosted on GitHub and you can also help us to improve it. "},{"title":"ALIEN 4 Cloud","baseurl":"","url":"/roadmap/index.html","date":null,"categories":[],"body":" Roadmap The following represents the current view of the Alien4Cloud development team on its product development cycle and future directions. It is intended for information purposes only, and should not be interpreted as a commitment, though we do our best to reach the dates and feature sets mentioned below. Premium features, support and certifications are available under a subscription, delivered by FastConnect , a Bull, Atos technologies subsidiary. Contact us for more details. We deliver major releases every 4 months. Please note that we also deliver intermediate sprint releases in the Alien4Cloud GitHub repo , every 3 weeks, in order to get feedback on features while still in development. 
Beginning of 2018 v 2.0 Core Container/Docker: Design container-based portable blueprints deployable on different Docker infrastructures with support of their specificities Define a group of containers as a single deployment unit to ensure co-localization and common usage of resources Post-matching with containers platform specific properties Kubernetes support DNS & Kubernetes Services support improvements Integration with external registry for public network exposed services AWS ECS support Ease usage of docker images with the ability to reference an image from a public or a private repository Display number of deployed containers and metrics on dashboards (Max & current) Improve variables/inputs management across environments TOSCA Catalog: Upload an image for a substituted type Access to the A4C forge: public and private blueprints New policies engine: Manage placement policies Affinity - Anti-Affinity policies Multi zones placement policies: Manage HA or topology performance by associating nodes to zones Manage runtime policies Auto-scaling integration Auto-Healing Supported containers platform policies Kubernetes Aws ECS Workflows: Improved workflows editor to support operations on relationships Launch a custom workflow on a specific instance of a scaled node Display deployment progress on the workflow view Deployment: Display deployment history at the application environment level Retrieve a deployed topology (Blueprint, location, matching, variables, inputs) Monitoring integration: New monitoring tools / Status feedback loop (For VMs, Containers, Containers scheduler) Viewing metrics and detailed cluster state information Security: Secrets management Improve application password management (UI - API) Out of the box Vault solution and support for custom vault integration Generate security groups from A4C for AWS, Azure, OpenStack Manage infrastructure resources security by environment type User experience: Streamlined process deployments Improved 
Archives/Catalog screen Orchestrators certifications Cloudify v4.0 Infrastructure support Improve integration capabilities with orchestrators and implementation artifacts support Certification for the following infrastructures/IaaS Amazon Web Services Microsoft Azure VMware vSphere Openstack Host-pool (physical machines) TOSCA conformance Improve networks support to reach TOSCA spec 1.1 Support for alien 4 cloud 1.4 and 2.0 DSL Additional Premium features v1.4 to v2.0 Migration tools To be defined Future Core TOSCA evolutions support Enhanced extensibility & pluggability Quotas management Multi locations 24x7 app-centric placement & runtime policies Blueprints Developer experience improvements More orchestrators, infrastructures & artifacts coverage Google Cloud Docker swarm Cloud Foundry integration OpenShift integration Docker schedulers integration improvements Additional Premium features NFV/SDN capabilities Application Lifecycle Management features & Promotion workflow Analytics and governance dashboards Simplified Self Service Chargeback & billing platforms integration Cloud brokering If you want to propose other features and/or if you are willing to contribute, please contact us . "},{"title":"Cloudify 4","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/index.html","date":null,"categories":[],"body":"Cloudify 4 is an open-source orchestrator backed by GigaSpaces that aims to support deployment on various locations. This section focuses on the Cloudify 4 orchestrator plugin for ALIEN, a plugin to manage deployments on various clouds using Cloudify 4.x . Alien 4 Cloud Cloudify 4 Support The Alien 4 Cloud plugin for Cloudify 4 exposes several nodes so that TOSCA templates can be deployed with Cloudify 4 to various locations, such as Amazon , Openstack , etc. This allows full portability of the topologies (or blueprints) that you have designed. See Supported locations for more details. 
Cloudify 4 currently manages the deployment and un-deployment of blueprints and supports triggering custom workflows that have been shipped within the blueprint at runtime. Out of the box, Cloudify 4.0 doesn’t support policies like auto-healing. Auto-healing As stated previously, Cloudify 4 doesn’t provide support for auto-healing of services. It provides a basic monitoring feature that we leverage in the blueprints we generate from the TOSCA model. This basic monitoring is based on machine status and not software status, meaning that if one of the software components in a blueprint crashes, it won’t be detected by Cloudify 4. We developed as part of Alien 4 Cloud the ability to generate a cron-based mechanism that checks the monitoring data in order to trigger an auto-healing workflow. This implementation is quite naive for now and is disabled by default on deployments, but it can be enabled per deployment through an orchestrator property. Scalability Scalability behavior is currently not deeply specified in TOSCA. When scaling a node, all nodes that are hosted_on the given node are scaled, as well as nodes that are connected or attached to it; this includes block storage and floating IPs. Since Cloudify 3.4, scaling handling has been improved. However, refer to the table below to see IaaS limitations. There are currently some missing details in the TOSCA specification on how relationships can be impacted in scaling scenarios, and we are working with both Cloudify and TOSCA to enhance the specification. IaaS limitations Here is a table that shows the limitations about scaling per IaaS: OpenStack Amazon BYON Azure ( Premium ) vSphere ( Premium ) Single Compute Compute + Network + Block Storage N/A Block storage recovery limitation Block storage recovery is the ability to reuse an already created block storage. Imagine you have a topology containing a compute with a block storage attached to it. 
You deploy the topology, the VM is started, the block storage is provisioned and attached to the VM, and then some process writes data to the disk. If you don’t use a DeletableBlockStorage , this means that you want the data written to the disk to be persistent: if the topology is undeployed then deployed again, you want the block storage to be reused. To manage such a feature, A4C keeps a trace of the volume ID and stores it in the deployment topology. When the topology is deployed again, this volume ID is used to find the volume in the IaaS rather than provisioning another one. "},{"title":"Cloudify 3","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/index.html","date":null,"categories":[],"body":"Cloudify 3 is an open-source orchestrator backed by GigaSpaces that aims to support deployment on various locations. This section focuses on the Cloudify 3 orchestrator plugin for ALIEN, a plugin to manage deployments on various clouds using Cloudify 3.x . Alien 4 Cloud Cloudify 3 Support The Alien 4 Cloud plugin for Cloudify 3 exposes several nodes so that TOSCA templates can be deployed with Cloudify 3 to various locations, such as Amazon , Openstack , etc. This allows full portability of the topologies (or blueprints) that you have designed. See Supported locations for more details. Cloudify 3 currently manages the deployment and un-deployment of blueprints and supports triggering custom workflows that have been shipped within the blueprint at runtime. Out of the box, Cloudify 3.3 doesn’t support policies like auto-healing and has very limited support for scalability, which causes issues in various scenarios. Note that the Cloudify team is working on 3.4, which should provide much better support for both HA and scalability concerns. Auto-healing As stated previously, Cloudify 3.3 doesn’t provide support for auto-healing of services. 
It provides a basic monitoring feature that we leverage in the blueprints we generate from the TOSCA model. This basic monitoring is based on machine status and not software status, meaning that if one of the software components in a blueprint crashes, it won’t be detected by Cloudify 3. We developed as part of Alien 4 Cloud the ability to generate a cron-based mechanism that checks the monitoring data in order to trigger an auto-healing workflow. This implementation is quite naive for now and is disabled by default on deployments, but it can be enabled per deployment through an orchestrator property. Scalability Scalability behavior is currently not deeply specified in TOSCA. When scaling a node, all nodes that are hosted_on the given node are scaled, as well as nodes that are connected or attached to it; this includes block storage and floating IPs. As Cloudify 3.4 has improved its handling of scaling, we no longer need the workaround scaling plugin and configurations that were required with the previous Alien4Cloud 1.2.x versions. However, refer to the table below to see IaaS limitations. There are currently some missing details in the TOSCA specification on how relationships can be impacted in scaling scenarios, and we are working with both Cloudify and TOSCA to enhance the specification. IaaS limitations Here is a table that shows the limitations about scaling per IaaS: OpenStack Amazon BYON Azure ( Premium ) vSphere ( Premium ) Single Compute Compute + Network + Block Storage N/A Block storage recovery limitation Block storage recovery is the ability to reuse an already created block storage. Imagine you have a topology containing a compute with a block storage attached to it. You deploy the topology, the VM is started, the block storage is provisioned and attached to the VM, and then some process writes data to the disk. 
If you don’t use a DeletableBlockStorage , this means that you want the data written to the disk to be persistent: if the topology is undeployed then deployed again, you want the block storage to be reused. To manage such a feature, A4C keeps a trace of the volume ID and stores it in the deployment topology. When the topology is deployed again, this volume ID is used to find the volume in the IaaS rather than provisioning another one. "},{"title":"ALIEN 4 Cloud - 1.4.0 - Documentation","baseurl":"","url":"/documentation/1.4.0/index.html","date":null,"categories":[],"body":"Welcome to the Alien 4 Cloud documentation. You will find here resources to use Alien 4 Cloud. This includes: Concepts of Alien 4 Cloud Installation and configuration Creation of cloud services archives (including an overview of OASIS TOSCA concepts) ALIEN for Cloud High level concept ALIEN for Cloud (Application LIfecycle ENabler for cloud) is a tool that aims to provide management for enterprise clouds and to help enterprises onboard their applications to a cloud, leverage its benefits and, based on project constraints, reach continuous delivery. As moving to the cloud is a structural change for an enterprise, ALIEN for Cloud leverages the TOSCA standard, the most advanced and supported standard for the cloud. The goal of ALIEN for Cloud is to enable the benefits of a cloud migration in the enterprise by easing DevOps collaboration, taking into account the capabilities of each IT expert in the enterprise. This is done by providing a single platform where every expert can contribute and share their effort and feedback with others. ALIEN provides three main features: Composable PaaS & DevOps collaboration Application lifecycle enablement Cloud governance Collaboration Collaboration in ALIEN for Cloud is achieved by giving each expert the ability to work in their field of expertise, and other experts the ability to benefit from that work and reuse it in a simple and declarative way. 
The TOSCA standard is a great starting point to enable such collaboration. ALIEN for Cloud adds user role management in order to increase the ability to easily collaborate on the platform. Composable PaaS Topology definition in ALIEN for Cloud The first aspect of ALIEN for Cloud is related to the core of cloud interoperability: defining an application topology that we can deploy on any cloud. It takes into account critical requirements for an enterprise: reusability extensibility flexibility consistency evolvability (Very) Quick introduction to TOSCA In the TOSCA model, an Application Topology is created by declaring component (node) instances (templates) based on existing types. The types define the meta-model of a component (properties, operations, capabilities and requirements) and its implementation artifacts. A TOSCA container can then deploy the declared topology on a cloud and orchestrate it. Collaboration in a TOSCA model is easy, as someone who wants to build an application topology can reuse components created by the experts. Typically an application architect will be able to reuse software and cloud components defined by the operational teams in the enterprise. Application lifecycle enablement Alien 4 Cloud allows users to define multiple versions and environments for an application; each environment has an associated version, allowing you to move a version from a development environment to the production environment through all the environments required in your lifecycle. Cloud governance As ALIEN for Cloud manages the topologies of applications as well as their deployments, it keeps a lot of information that enables governance of your cloud: a better vision of your applications and their lifecycle, the ability of your projects to deliver fast, etc. It enables features like rationalisation of IT capacity planning, management of middleware support and expiration dates, etc. 
"},{"title":"ALIEN 4 Cloud","baseurl":"","url":"/index.html","date":null,"categories":[],"body":" Alien4Cloud is an open source platform that makes application management on the cloud easy for enterprises Get Started Download Single click deploy to any target Easy self-service consumption Manageable IT infrastructure, private or public, at any scale Open and extensible Manage your deployments Dev friendly Spread knowledge and best-practices What can a4c do for you Single click deploy to any target Advanced matching engine allows you to deploy your portable topology to any target! Without any modification, just choose to deploy to AWS, Azure, OpenStack, your physical machines etc. Have a docker-based application? Deploy your docker applications seamlessly on Marathon or Kubernetes etc. Easy self-service consumption Allow your teams to access IT resources straightaway with fine-grained rights management. Design portable topologies/blueprints using a simple drag & drop editor. Share your topologies with other users in the system, manage your application topologies. Manageable IT infrastructure, private or public, at any scale Provide self-service on pre-configured infrastructure resources (cloud, bare metal, already running services) with a comprehensive system of rights and resource access management. Open and extensible Add your own devops components, add support for your own custom cloud, leverage any API. Integrate with any of your favorite tools. Extend the alien4cloud UI or rest services to add or even override existing screens. Manage your deployments Deploy your application on the selected cloud using our single click deploy feature, upgrade existing deployments and get immediate feedback on the deployment process. Dev friendly Comprehensive version and environment management allows you to build your continuous delivery lifecycle, including any complex deployment requirements. Easily run your workloads on any version, including integration testing and load testing. 
Our full-packaged model, covering everything from infrastructure elements to the application, allows you to easily recreate the environment of any of your versions to reproduce bugs. Command line addict? You can interact with A4C using the rest API and simple curl requests. Spread knowledge and best-practices Package your DevOps IT artifacts in reusable TOSCA components and make them available to others in a self-service catalog. Expose your existing artifacts, from Docker containers to classical Shell, Ansible, Batch, Chef, Puppet, in a composable and reusable way for non-expert consumers etc. Videos Success story Société Générale reduced its time to market by 66%. Deployment times in minutes Close to zero incident rate on deployment Application and environments 100% synced (deployment model always up to date) Use case Kubernetes use case Contact Need more information? Get in touch "},{"title":"Administration Guide","baseurl":"","url":"/documentation/1.4.0/admin_guide/index.html","date":null,"categories":[],"body":"This section contains the guide for Alien 4 Cloud installation and configuration. Alien 4 Cloud requires orchestrator(s) in order to deploy on the configured clouds. Orchestrator plugins are currently not able to install the related orchestrator and deploy it on the runtime clouds. "},{"title":"Installing and configuring","baseurl":"","url":"/documentation/1.4.0/orchestrators/marathon_driver/install_config.html","date":null,"categories":[],"body":"Here is the procedure to install and configure the Marathon driver. Download The last stable version works with the latest stable alien version. Install The driver is packaged as an ALIEN plugin; install it in Administration > Plugins of your running instance of ALIEN, using the plugin view. Create the orchestrator Login as an admin, and create an orchestrator: Administration > Orchestrators > New Orchestrator . As Plugin for this orchestrator, make sure to select Marathon from the list. Validate. 
Configure the orchestrator On the orchestrator list, select and click on the newly created orchestrator. Set up the orchestrator by simply giving Marathon’s URL. Create an empty location - you don’t need to create any resources for now. "},{"title":"Installing and configuring","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/install_config.html","date":null,"categories":[],"body":"Find here how to install and configure the Cloudify 3 driver. Download The first step, of course, is to download the plugin. The last stable version works with the latest stable alien version; the last build version works with the latest build alien version. Install The driver is packaged as an ALIEN plugin, install it in Administration > Plugins of your running instance of ALIEN. Configure You need to create an orchestrator and configure it. Creating the orchestrator Login as an admin, and create an orchestrator: Administration > Orchestrators > New Orchestrator . As Plugin for this orchestrator, make sure to select Cloudify 3 Orchestrator from the list. Validate. Configuring the orchestrator On the orchestrator list, select and click on the newly created orchestrator, then follow these steps to configure your orchestrator: Connection Configuration : Click on the Configuration link to configure the connection to your bootstrapped cloudify manager. In the Driver configuration section, change the URL to use the correct IP of your manager, which you obtained after the bootstrap operation. If your manager is secured, you can configure the admin credentials; the disableSslVerification option should only be set to true for testing purposes, as it will disable all certificate validation for SSL. The connection timeout in milliseconds between A4C and the Cloudify instance can be configured with the property connectionTimeout . Enable Orchestrator : You can then switch back to the Information tab and enable the orchestrator by clicking on the Enable orchestrator button. 
If the orchestrator is not enabled, please check Alien’s log to get details on the error; it might be a bad configuration (bad connection URL, bad user/password, invalid certificate etc.) Locations : An orchestrator can manage multiple locations; for example, you can have the same orchestrator managing your local cloud and your public cloud. It is possible for the same deployment to span multiple locations. For the moment Cloudify 3 only supports a single location, so we can only have 1 location per Cloudify 3 orchestrator. Click on New Location , and in the popup, enter the name of your location and its type. Configuration Resources : The configuration resources are not real IAAS resources as such. In general they are configurations for other resources. Choose the type of your resource, then click on Add to create a new one. In this example, you have a configuration resource of type Image on an OpenStack location; you can describe here the details of the image, such as os type (linux, windows …) and distribution (Ubuntu, CentOS …), which must correspond to what you have on your IAAS. The same thing can be done for the types flavor and availability zone. On Demand Resources : On demand resources are real IAAS resources that can be used to replace abstract resources in a topology. Click on Auto-config to generate on demand resources from the previous configuration resources. As you can see below, with the Image Ubuntu and the Flavor Medium, Alien generated a Compute template Medium_Ubuntu You can always configure your resources (in this case compute) without using the Auto-config functionality. To create resources that cannot be auto-configured (such as a volume, a network or a non auto-configured compute etc.), choose the type of the resource, then click on Add . Concrete examples of configuration can be found in our various integration tests for Openstack , AWS , BYON Congratulations! You’ve finished configuring your Cloudify 3 orchestrator. 
You can now begin to deploy your application with this orchestrator. Offline environment In order to deploy applications in an offline environment, you will need to add some libraries to your PyPI repository and make them available to the manager, depending on the IaaS you are targeting. Dependencies for Amazon Name Version boto 2.38.0 pycrypto 2.6.1 Dependencies for Azure Name Version azure-storage 0.33.0 pyyaml 3.10 requests 2.7.0 Dependencies for Openstack Name Version python-cinderclient 1.2.2 python-keystoneclient 1.6.0 python-neutronclient 2.6.0 python-novaclient 2.26.0 IPy 0.81 Dependencies for vSphere Name Version netaddr 0.7.18 pyvmomi 5.5.0.2014.1.1 pyyaml 3.10 Dependencies for BYON (host-pool-plugin) No extra dependencies needed "},{"title":"Installing and configuring","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/install_config.html","date":null,"categories":[],"body":"Find here how to install and configure the Cloudify 4 driver. Download The first step, of course, is to download the plugin. The last stable version works with the latest stable alien version; the last build version works with the latest build alien version. Install The driver is packaged as an ALIEN plugin, install it in Administration > Plugins of your running instance of ALIEN. Configure You need to create an orchestrator and configure it. Creating the orchestrator Login as an admin, and create an orchestrator: Administration > Orchestrators > New Orchestrator . As Plugin for this orchestrator, make sure to select Cloudify 4 Orchestrator from the list. Validate. Configuring the orchestrator On the orchestrator list, select and click on the newly created orchestrator, then follow these steps to configure your orchestrator: Connection Configuration : Click on the Configuration link to configure the connection to your bootstrapped cloudify manager. In the Driver configuration section, change the URL to use the correct IP of your manager, which you obtained after the bootstrap operation. 
If your cloudify manager is actually a cluster of instances, you can specify a comma-separated list of URLs rather than a single URL, so that the alien4cloud cloudify4 plugin will find the active member and fail over automatically in case of failure of one of the nodes. Don’t forget to set the port of the log application. If your manager is secured, you can configure the admin credentials; the disableSslVerification option should only be set to true for testing purposes, as it will disable all certificate validation for SSL. The connection timeout in milliseconds between A4C and the Cloudify instance can be configured with the property connectionTimeout . Retry mechanism A retry mechanism is enabled if you configure Cloudify in H.A. (more than 1 instance). In H.A., a request that fails because of connection issues will be retried straight away on another available Cloudify instance. In case none of the Cloudify instances responds successfully, A4C will wait for failOverDelay milliseconds and try to contact all the Cloudify instances one more time (up to failOverRetry times or until a working instance has been found). Enable Orchestrator : You can then switch back to the Information tab and enable the orchestrator by clicking on the Enable orchestrator button. If the orchestrator is not enabled, please check Alien’s log to get details on the error; it might be a bad configuration (bad connection URL, bad user/password, invalid certificate etc.) Locations : An orchestrator can manage multiple locations; for example, you can have the same orchestrator managing your local cloud and your public cloud. It is possible for the same deployment to span multiple locations. For the moment Cloudify 4 only supports a single location, so we can only have 1 location per Cloudify 4 orchestrator. Click on , in the popup, enter the name of your location and its type. Configuration Resources : The configuration resources are not real IAAS resources as such. 
In general they are configurations for other resources. Choose the type of your resource, then click on Add to create a new one. In this example, you have a configuration resource of type Image on an OpenStack location; you can describe here the details of the image, such as os type (linux, windows …) and distribution (Ubuntu, CentOS …), which must correspond to what you have on your IAAS. The same thing can be done for the types flavor and availability zone. On Demand Resources : On demand resources are real IAAS resources that can be used to replace abstract resources in a topology. Click on to generate on demand resources from the previous configuration resources. As you can see below, with the Image Ubuntu and the Flavor Medium, Alien generated a Compute template Medium_Ubuntu You can always configure your resources (in this case compute) without using the Auto-config functionality. To create resources that cannot be auto-configured (such as a volume, a network or a non auto-configured compute etc.), choose the type of the resource, then click on Add . Concrete examples of configuration can be found in our various integration tests for Openstack , AWS , BYON Congratulations! You’ve finished configuring your Cloudify 4 orchestrator. You can now begin to deploy your application with this orchestrator. Offline environment In order to deploy applications in an offline environment, you will need to add some libraries to your PyPI repository and make them available to the manager, depending on the IaaS you are targeting. 
Dependencies for Amazon Name Version boto 2.38.0 pycrypto 2.6.1 Dependencies for Azure Name Version azure-storage 0.33.0 pyyaml 3.10 requests 2.7.0 Dependencies for Openstack Name Version python-cinderclient 1.2.2 python-keystoneclient 1.6.0 python-neutronclient 2.6.0 python-novaclient 2.26.0 IPy 0.81 Dependencies for vSphere Name Version netaddr 0.7.18 pyvmomi 5.5.0.2014.1.1 pyyaml 3.10 Dependencies for BYON (host-pool-plugin) No extra dependencies needed "},{"title":"Installation and configuration","baseurl":"","url":"/documentation/1.4.0/admin_guide/installation_configuration.html","date":null,"categories":[],"body":" This section describes the installation and configuration of Alien 4 Cloud for production mode. If you wish to use Alien 4 Cloud in demo or development mode, please refer to the getting started guide. Supported platforms To get more information about the supported platforms, please refer to this section . Ports requirements To get more information about the ports requirements, please refer to this section . Alien 4 Cloud configuration Alien 4 Cloud contains a basic configuration that is good enough for a test environment. However, in order to move into production or to integrate with other systems (such as LDAP), you need to define an advanced configuration. In order to provide configuration to Alien 4 Cloud, you must place an Alien configuration file in a config folder alongside the Alien 4 Cloud war. 
├── alien4cloud-ui- { version } -standalone.war ├── config/alien4cloud-config.yml ├── config/elasticsearch.yml You can find default configurations for both files in the GitHub repository: alien4cloud-config.yml elasticsearch.yml You can also add a simple start script: ├── start.sh ├── alien4cloud-ui- { version } -standalone.war ├── config/alien4cloud-config.yml ├── config/elasticsearch.yml cd ` dirname $0 ` JAVA_OPTIONS = \"-server -showversion -XX:+AggressiveOpts -Xmx2g -Xms2g -XX:MaxPermSize=512m -XX:+HeapDumpOnOutOfMemoryError\" java $JAVA_OPTIONS -jar alien4cloud-ui- { version } -standalone.war JVM tuning See the JVM tuning section for advanced Alien4Cloud JVM options. Logging configuration If you need to customize log4j2 (in order to activate some loggers, change the log file location …) add a log4j2.xml in the config folder and specify the classpath for java : java $JAVA_OPTIONS -cp config/:alien4cloud-ui- { version } -standalone.war org.springframework.boot.loader.WarLauncher You can find a log4j2 sample configuration file at log4j2.xml For example, to use Alien with the debug level: Replace <root level = \"info\" > by <root level = \"debug\" > Specific appender for the deployment logs Premium feature This section refers to a premium feature. Alien4Cloud premium offers the possibility to see / search deployment logs from premium orchestrators. Since Alien 1.4, a specific logger is used for these events. <logger name = \"DEPLOYMENT_LOGS_LOGGER\" level = \"info\" additivity = \"false\" > <AppenderRef ref = \"DEPLOYMENT_LOGS_APPENDER\" /> </logger> You can enable this logger in alien4cloud-config.yml : logs_deployment_appender: enable : true This logger has a rolling file appender; you can adapt it to your requirements. By default, logs older than 30 days are automatically deleted. 
For example, you can change this retention time in the log4j2.xml config to 10 minutes: Replace <IfLastModified age = \"30d\" /> by <IfLastModified age = \"10mn\" /> Audit configuration You can personalize the operations audit in its configuration page. You can select for each controller the REST methods to monitor. "},{"title":"Interface definition","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/interface_definition.html","date":null,"categories":[],"body":"An interface definition defines a named interface that can be associated with Node or Relationship types and templates. Keynames Keyname Required Type Description tosca_definitions_version description (1) no string An optional description for the interface. alien_dsl_1_3_0, alien_dsl_1_2_0, tosca_simple_yaml_1_0 inputs no string The optional list of input parameter definitions, common to all underlying operations alien_dsl_1_4_0 (1) TOSCA 1.0.0 does not specify a description for interface definitions, but this looks like an omission. Therefore our tosca_simple_yaml_1_0 support includes support for the description keyword. Grammar <interface_definition_name> : inputs : <parameter_definitions> <operation_definition_1> ... <operation_definition_n> Example The following example shows how to define a node type with an operation: node_types : fastconnect.nodes.OperationSample : interfaces : Standard : description : Normative interface that defines a node standard lifecycle. create : /scripts/install.sh configure : description : This is the configuration description. implementation : /scripts/setup.sh inputs : value_input : 4 "},{"title":"Kubernetes (Beta)","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/kubernetes.html","date":null,"categories":[],"body":"This page helps you configure and deploy a hybrid nodecellar application on Kubernetes through Alien4Cloud/Cloudify. Prerequisites A Kubernetes cluster A Cloudify Manager The Cloudify Manager has kubectl installed on it. 
The Cloudify manager must have access to the Kubernetes REST API. The Kubernetes nodes should eventually be able to communicate with Cloudify’s agents. In our case, the Nodecellar node deployed on Kubernetes should be able to contact its MongoDB database deployed on Openstack through the port 3000. You can deploy a Kubernetes Cluster with Alien4Cloud using our components: Kubernetes components Kubernetes topology Configurations When configuring your orchestrator , before enabling it, configure the Kubernetes URL and port. Upload TOSCA Components In order to build our hybrid nodecellar topology, you will need to import some TOSCA components. Follow the import from Git location instructions to import the docker-tosca-types into Alien4Cloud. Once uploaded, your Alien4Cloud will contain additional types: docker-types nodecellar-docker-types (with the 2 node types alien.nodes.Application.Docker.Nodecellar and alien.nodes.Application.Docker.Mongo ) and 2 topology templates: NodecellarDocker and NodecellarDockerHybrid Note that the NodecellarDocker template won’t deploy right now since the Kubernetes plugin does not yet support volumes. But you can modify the topology by removing the 2 volumes and deploying the 2 containers. Deploy the Hybrid Topology Create a new application using the NodecellarDockerHybrid topology template. Finally, you just have to click the deploy button and wait for the deployment to complete. And if you have access to the Kubernetes web UI, you can check that you have a nodecellar deployed on it "},{"title":"Kubernetes (Beta)","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/kubernetes.html","date":null,"categories":[],"body":"This page helps you configure and deploy a hybrid nodecellar application on Kubernetes through Alien4Cloud/Cloudify. Prerequisites A Kubernetes cluster A Cloudify Manager The Cloudify Manager has kubectl installed on it. The Cloudify manager must have access to the Kubernetes REST API. 
The Kubernetes nodes should eventually be able to communicate with Cloudify’s agents. In our case, the Nodecellar node deployed on Kubernetes should be able to contact its MongoDB database deployed on Openstack through the port 3000. You can deploy a Kubernetes Cluster with Alien4Cloud using our components: Kubernetes components Kubernetes topology Configurations When configuring your orchestrator , before enabling it, configure the Kubernetes URL and port. Upload TOSCA Components In order to build our hybrid nodecellar topology, you will need to import some TOSCA components. Follow the import from Git location instructions to import the docker-tosca-types into Alien4Cloud. Once uploaded, your Alien4Cloud will contain additional types: docker-types nodecellar-docker-types (with the 2 node types alien.nodes.Application.Docker.Nodecellar and alien.nodes.Application.Docker.Mongo ) and 2 topology templates: NodecellarDocker and NodecellarDockerHybrid Note that the NodecellarDocker template won’t deploy right now since the Kubernetes plugin does not yet support volumes. But you can modify the topology by removing the 2 volumes and deploying the 2 containers. Deploy the Hybrid Topology Create a new application using the NodecellarDockerHybrid topology template. Finally, you just have to click the deploy button and wait for the deployment to complete. And if you have access to the Kubernetes web UI, you can check that you have a nodecellar deployed on it "},{"title":"LAMP Stack Tutorial","baseurl":"","url":"/documentation/1.4.0/devops_guide/lamp_stack_tutorial/lamp_stack.html","date":null,"categories":[],"body":"This tutorial is based on the well known opensource LAMP stack and aims at getting you started with a “real application case”. We will go through all the steps of the stack component definitions and end up with a runnable example. The components of the Wordpress stack are in version 2.0.0. This version was released after successful tests on Ubuntu 12.04 and 14.04. 
Regarding TOSCA component definition, we are using the WD03 version for this tutorial. Here is the full Alien context used to give this tutorial a try : A4C element Usage TOSCA base types 1.0.0-ALIEN11 A4C WD03 tosca-normative-types A4C Release 1.1 Alien4Cloud A4C Cloudify3 Driver 1.1 alien4cloud-cloudify3-provider 1.1 TOSCA base types Basically, to build our full application (topology), we will have a set of basic components defined in TOSCA. These components are added to Alien at the first bootstrap. More details about normative types . The TOSCA definition is in constant evolution, so be sure you are using the fixed implementation given just above. Our components We will define our components and other “relational” items to link those components. This is the main component list : APACHE HTTP Server : http web server to serve your website MySQL : relational database management system (RDBMS) PHP : server-side language used to interact with the database, working with your HTML files This is the basic stack for a LAMP environment, and in the A4C context we will add one more component : Wordpress : this component installs the Wordpress CMS on the Apache HTTP Server. Wordpress also needs PHP and a MySQL database. Server hosting The L in LAMP stands for Linux, so for this tutorial we assume that we’re working with the Ubuntu 14.04 distribution as the server. You must have an image based on Ubuntu 12.04 or 14.04 available on your targeted cloud. BlockStorage To persist your data even after your application is undeployed, we will use this default component described in TOSCA base types 1.0.0-ALIEN11, which allows us to have a volume created, mounted and attached to our server host. MySQL data will be stored on this volume. 
"},{"title":"Component Apache HTTP","baseurl":"","url":"/documentation/1.4.0/devops_guide/lamp_stack_tutorial/lamp_stack_apache.html","date":null,"categories":[],"body":"Apache HTTP Server is free software from the Apache Software Foundation, created in 1995. Apache is the most popular web server on the Internet and the web server of the LAMP bundle. Used version for this tutorial : Apache HTTP Server This installation is based on the Ubuntu distribution with the apt-get command. Definition In the definitions folder, we need to write the TOSCA description of our component. It’s a YAML file used to describe your component. The first line is the TOSCA definition version of the file. The second is a text description of the component. The tags icon is optional. Naming / description TOSCA assumes the existence of a normative base type set. The TOSCA type of Apache is tosca.nodes.WebServer . Properties The Apache recipe has three properties : Property Usage Comment version Mention the Apache HTTP Server version Constant version in our example (v2.4). port Port where to expose the Apache HTTP service The default port is : 80 . You can of course change it, as long as you don’t pick an already used one. document_root The Root Directory of apache2 The default value is : /var/www Lifecycle and related scripts In the interfaces we define the script used to create the node. In our case we just use the create operation; see the documentation for all possible operations. In the artifact, we define the folder that contains the script. As we are using a Groovy artifact, we define this artifact at the end of the file. 
Operation Usage Comment create Executed script to install your Apache HTTP server on the Compute Through apt-get on an ubuntu image start Start apache2 Restart apache2 if it’s already launched Optional : To test this Apache recipe, you could create a simple Topology with a Compute and an Apache : With a well configured PaaS Provider , you will have an Apache HTTP Server deployed on a server and ready to use. "},{"title":"Stack Application Topology","baseurl":"","url":"/documentation/1.4.0/devops_guide/lamp_stack_tutorial/lamp_stack_application.html","date":null,"categories":[],"body":"On this page we will create the topology representing our LAMP stack. Follow the instructions step by step and at the end you will have your stack up and running. To be more concrete, we will use the Wordpress component to install a real CMS. The components of the Wordpress stack are in version 2.0.0. This version was released after successful tests on Ubuntu 12.04 and 14.04. Prerequisites Get, check out or download all components listed in the main page of this tutorial Import the components of the Wordpress topology in A4C Configure your cloud plugin PaaS Provider Then compose your topology following the next steps On each Alien4Cloud page, in the top right corner, you have a button with a question mark [?]. Click it to start a tour explaining what you can do in the current page and how to do it. Create the topology for the Wordpress application We have explained all the components of our LAMP stack. Now, we will use these components to deploy a Wordpress on a cloud. To begin, just go to the Applications menu and create a New application , then go to the application sub-menu Topology . You are now ready to compose your application. Let’s do it ! Step 1 : The Compute In this step, drag and drop a Compute into the topology design view. You need to specify two properties for this compute : type : linux architecture : x86_64 Step 2 : The BlockStorage Now, drag and drop a BlockStorage into the view. 
Select it by clicking on it, then attach it to the Compute in the right Properties tab. Make sure to select the relation attachment with the 1..1 constraint. In this Properties tab, also set the size value to 1 (GB by default). Step 3 : Apache, MySQL, PHP Then, drag and drop a MySQL , a PHP and an Apache onto the existing Compute node. For each new node dropped onto the Compute you will have to choose a target for the HostedOn relationship (generally, just check the relationship name and click Finish ). Change the default values of MySQL or Apache if you want a custom install. Step 4 : The Wordpress The last component to add is the Wordpress . Drag and drop it onto the Apache and select the WordpressHostedOn relationship between Wordpress and Apache . After this, create the relationships between Wordpress and MySQL and between Wordpress and PHP . Deployment Now we can deploy our topology onto the cloud we’ve defined in the prerequisites. The Deploy button is accessible in the Application sub-menu Deployments . To configure your Wordpress at your first run, open your web browser and go to IP_SERVER/CONTEXT_PATH . To configure your Wordpress , specifically for the MySQL settings, be sure you enter the settings you defined in your MySQL configuration. To check your running application, go to the application Runtime sub-menu and select the node you want. In the Details tab, automatically selected, you just need to select the instance line you want to see more details, like ip_address and public_ip_address , and try your application. Here we can select the Compute node and get the public IP to run Wordpress for the first time and later on. "},{"title":"Component BlockStorage","baseurl":"","url":"/documentation/1.4.0/devops_guide/lamp_stack_tutorial/lamp_stack_blockstorage.html","date":null,"categories":[],"body":"This component represents a storage space / volume. This volume has to be attached to a compute to be used. 
For more details about this custom component : BlockStorage Used version for this tutorial (defined in normative types): BlockStorage Definition Naming / description Every component should at least inherit from tosca.nodes.Root . As a default normative type, it’s the case for BlockStorage . Properties Check details : BlockStorage For the application you will need volume_id or size to be defined. Lifecycle and related scripts There is no lifecycle operation for this component in the default version. "},{"title":"Component MySQL","baseurl":"","url":"/documentation/1.4.0/devops_guide/lamp_stack_tutorial/lamp_stack_mysql.html","date":null,"categories":[],"body":"This component will install the MySQL RDBMS on the host server. Used version for this tutorial : MySQL This installation is based on the Ubuntu distribution with the apt-get command. Definition Let’s describe the important parts of the full MySQL definition. Naming / description The node name is important since it’s unique. We follow this template in A4C recipe development : [organisation].nodes.Name tosca_simple_yaml_1_0_0_wd03 : version of tosca used in the definition, leave it as it is for the moment Our node name / id : alien.nodes.Mysql The parent : tosca.nodes.Database It’s a good practice to inherit from a base type to create your own component when possible. Here tosca.nodes.Database . Properties All properties required or optional to use the component. MySQL specific properties : Property Usage Comment port port number injected in the MySQL installation Default : 3306 storage_path path where the blockstorage is mounted in the compute Constant value with the Cloudify Driver version we use in this tutorial. All blockstorages attached to a compute will have this mounted volume. 
bind_address Allow remote access to your server Default : true Properties inherited from its parent : tosca.nodes.Database Here we are overriding those properties from the parent component, and we describe a database with a user we want to create at initialization. Property Usage Comment db_name Database name we want to create wordpress to match our final application case db_user Name of the user who will have rights on this database This user will have all privileges on this dedicated database db_password Password for this user … Lifecycle and related scripts The real scripts you will run during the different component life steps. Two main steps here in the operations block : Operation Usage Comment create Executed script to install MySQL on the server Through apt-get on your ubuntu image start Executed script to configure MySQL to use a specific storage path (the blockstorage) Configured and started with specific ubuntu hints (rights concerns) "},{"title":"Component PHP","baseurl":"","url":"/documentation/1.4.0/devops_guide/lamp_stack_tutorial/lamp_stack_php.html","date":null,"categories":[],"body":"This component will install PHP on the host server. Used version for this tutorial : PHP This installation is based on the Ubuntu distribution with the apt-get command. Definition PHP is the programming language of the LAMP stack; it’s a server-side scripting language. On this page, we just explain the recipe of this component. Below, the header of the php-type : Properties The PHP recipe is not so complicated; it has only three properties. The first property is the version, which, like for the Apache recipe, is just to be mentioned. The two other properties are booleans to install the PHP Apache 2 module or the PHP MySQL module. 
Property Usage Comment version Mention the php version Constant version in our example (v5) Lifecycle and related scripts PHP inherits from the tosca base type tosca.nodes.SoftwareComponent Operation Usage Comment create Executed script to install PHP on the Compute Through apt-get on an ubuntu image "},{"title":"Component Wordpress","baseurl":"","url":"/documentation/1.4.0/devops_guide/lamp_stack_tutorial/lamp_stack_wordpress.html","date":null,"categories":[],"body":" To use Wordpress you need to upload the required recipes : Apache2 , Mysql and PHP . Definition The Wordpress is a special component of our LAMP stack. This component takes the latest zip of Wordpress and uploads it to the Apache HTTP Server to be deployed. Used version for this tutorial : Wordpress Properties Wordpress properties : Property Usage Comment context_path Name of the folder inside the default folder of apache2 Empty as default zip_url URL from where you download the application zip Default : https://wordpress.org/latest.zip Relationship and related scripts Relationship Usage Comment WordpressHostedOnApache Used to describe that the Wordpress is deployed on the targeted Apache server Through apt-get and unzip WordpressConnectToMysql Used to describe the connection between Wordpress and Mysql Sets the conf of Mysql into the config files of Wordpress WordpressConnectToPHP Used to describe the connection between Wordpress and PHP Installs the PHP module for Apache2 When you define a topology, make sure to select a WordpressHostedOn relation between Wordpress and Apache . "},{"title":"LDAP integration","baseurl":"","url":"/documentation/1.4.0/admin_guide/ldap.html","date":null,"categories":[],"body":"Alien 4 Cloud can interface with an external LDAP server in order to retrieve users and perform authentication. When using an LDAP server, the Alien admin can still manage ‘local’ users inside Alien while LDAP users should be managed inside the LDAP repository. 
It is also possible to delegate global role management inside Alien even for LDAP users, or to define a mapping from roles inside LDAP to roles within Alien. LDAP configuration In order to plug ALIEN into your LDAP repository, you must configure the ldap section of the alien4cloud-config.yml file. Enable LDAP Enabling ldap is as easy as updating the ldap configuration section and changing the enabled flag to true. ### Ldap Configuration ldap : enabled : true ... ### End Ldap Configuration Configure LDAP Server The first step in order to configure LDAP in Alien 4 Cloud is to configure the server parameters: Keynames Keyname Required Description anonymousReadOnly yes Some LDAP server setups allow anonymous read-only access. If you want to use anonymous Contexts for read-only operations, set the anonymousReadOnly property to true. url yes Url of the ldap server. userDn yes Dn of the user to use in order to connect to the LDAP server. password yes Password of the user to use in order to connect to the LDAP server. Example ldap : ... anonymousReadOnly : true url : ldap://ldap.fastconnect.fr:389 userDn : uid=admin,ou=system password : secret ... Configure users retrieval In order to retrieve users from the LDAP you must specify the base in which to look for users, as well as an optional filter to retrieve only the user entries (and filter out inactive ones if you have some, for example). Keynames Keyname Required Description base yes The base in which to look for users within your LDAP filter yes A filter query to be processed by your LDAP server to filter the users retrieved into Alien 4 Cloud. Example ldap : ... base : ou=People,dc=fastconnect,dc=fr filter : (&(objectClass=person)(!(objectClass=CalendarResource))(accountStatus=active)) ... Configure user mapping Now that you can retrieve users from LDAP, it is critical to define a mapping from your LDAP user entry attributes to Alien 4 Cloud user properties. 
Keynames Keyname Required Description mapping: id: yes Name of the LDAP attribute that contains the unique id for the user within ldap that should be used as the user’s login (username) within Alien 4 Cloud. mapping: firstname: yes Name of the LDAP attribute that contains the user’s firstname mapping: lastname: yes Name of the LDAP attribute that contains the user’s lastname mapping: email: yes Name of the LDAP attribute that contains the user’s email mapping: active: key: no Name of the LDAP attribute that allows to know if a user is active. mapping: active: value: no Value of the LDAP attribute for which the user is considered as active. mapping: roles: defaults: yes Roles to use when importing a user when no role mapping is defined. Note: these roles are used only on user import. When no role mapping is defined, the roles of users can be managed in Alien4Cloud. mapping: roles: key: no Name of the LDAP attribute that contains the user’s roles. If this key is not specified, user roles will be managed inside alien. Note: at import, users will be created inside alien with the default roles. mapping: roles: mapping: no Mapping of an LDAP role to an ALIEN role. Note that it is not currently possible to map a single LDAP role to multiple Alien roles. ### Ldap Configuration ldap : ... mapping : id : uid firstname : givenName lastname : sn email : mail # optional mapping key and value to determine if the user is active active : key : accountStatus value : active roles : defaults : COMPONENTS_BROWSER # optional configuration for role mapping (when you want to manage roles in ldap and not in alien for ldap users). #key: description #mapping: ROLE_CLOUDADMINS=ADMIN,ROLE_CLOUDCOMPONENTS=COMPONENTS_MANAGER ### End Ldap Configuration Limitations and known issues Even when a user has roles managed in LDAP, an Alien admin can edit their roles. However, when the user logs in, the roles from LDAP will be reloaded into alien. 
Roles changed in LDAP will not appear in Alien as long as the user doesn’t log in. "},{"title":"Amazon (AWS)","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/location_amazon.html","date":null,"categories":[],"body":"The open source cloudify 3 orchestrator plugin allows you to deploy applications on Amazon. Configuration The configuration of the location is done while configuring the orchestrator, before or after activation. Normally there is nothing to do here, as the default provided configurations are good enough to have the location working. Tosca mapped / location exposed types The Amazon location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The tosca type tosca.nodes.Compute is mapped to the amazon nodes: alien.cloudify.aws.nodes.Compute for a linux compute, alien.cloudify.aws.nodes.WindowsCompute for a windows compute. To help you generate those, the configuration resources alien.cloudify.aws.nodes.Image and alien.cloudify.aws.nodes.InstanceType can be created, and then used to auto-generate Compute nodes. Network The tosca type tosca.nodes.Network can be mapped as two types of network: Public Network Exposed as the location type alien.nodes.aws.PublicNetwork , a public network, which will result in the attribution of an elastic IP to the linked resource (compute). Private Network Not supported yet. Volumes The tosca type tosca.nodes.BlockStorage can be mapped as two types of volumes: Deletable volumes Exposed as the location type alien.cloudify.aws.nodes.DeletableVolume , this is a volume that will ALWAYS be DELETED when undeploying the application, therefore leading to the loss of all data stored on it. Reusable volumes Exposed as the location type alien.cloudify.aws.nodes.Volume , this is a volume that will not be deleted at the end of the application life-cycle. 
It can therefore, between two deployments of the same application on the same environment and location, be re-used and attached to a compute, making the data previously stored on it accessible again. Scaling Scaling is now fully supported. This means we can scale a single Compute , or a Compute + Storage + IP-Address association. Known limitation when scaling a reusable volume When scaling a compute with a reusable volume, A4C will keep track of the volume ID and zone (more details here ). Unfortunately the zone information is not correctly handled when the volumes are in the same availability zone, so make sure to check the volume id and zone properties before redeploying your application. This limitation will be fixed very shortly. "},{"title":"Amazon (AWS)","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/location_amazon.html","date":null,"categories":[],"body":"The open source cloudify 4 orchestrator plugin allows you to deploy applications on Amazon. Configuration The configuration of the location is done while configuring the orchestrator, before or after activation. Normally there is nothing to do here, as the default provided configurations are good enough to have the location working. Tosca mapped / location exposed types The Amazon location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The tosca type tosca.nodes.Compute is mapped to the amazon nodes: alien.cloudify.aws.nodes.Compute for a linux compute, alien.cloudify.aws.nodes.WindowsCompute for a windows compute. To help you generate those, the configuration resources alien.cloudify.aws.nodes.Image and alien.cloudify.aws.nodes.InstanceType can be created, and then used to auto-generate Compute nodes. 
Network The tosca type tosca.nodes.Network can be mapped as two types of network: Public Network Exposed as the location type alien.nodes.aws.PublicNetwork , a public network, which will result in the attribution of an elastic IP to the linked resource (compute). Private Network Not supported yet. Volumes The tosca type tosca.nodes.BlockStorage can be mapped as two types of volumes: Deletable volumes Exposed as the location type alien.cloudify.aws.nodes.DeletableVolume , this is a volume that will ALWAYS be DELETED when undeploying the application, therefore leading to the loss of all data stored on it. Reusable volumes Exposed as the location type alien.cloudify.aws.nodes.Volume , this is a volume that will not be deleted at the end of the application life-cycle. It can therefore, between two deployments of the same application on the same environment and location, be re-used and attached to a compute, making the data previously stored on it accessible again. Scaling Scaling is now fully supported. This means we can scale a single Compute , or a Compute + Storage + IP-Address association. Known limitation when scaling a reusable volume When scaling a compute with a reusable volume, A4C will keep track of the volume ID and zone (more details here ). Unfortunately the zone information is not correctly handled when the volumes are in the same availability zone, so make sure to check the volume id and zone properties before redeploying your application. This limitation will be fixed very shortly. "},{"title":"Location / resources autorization","baseurl":"","url":"/documentation/1.4.0/user_guide/location_autorization.html","date":null,"categories":[],"body":" Location Every new location is private; you need to authorize some entities to use it. These entities can be one of the following : user group application application environment application environment type The choice of these entities allows a security policy for locations adapted to all situations. 
Authorize user or group Each location has a security panel. In this view you can manage authorizations for all entities. The view is split into tabs, each tab referring to one entity type. Click on the button to grant users, or on this one to revoke the authorizations of a user. It is exactly the same logic to manage authorizations on groups . Authorize application environment, application environment type or application The third tab is for application, application environment and application environment type. An application can have many environments, so if you authorize an application for a location, this authorization is valid for all of its nested resources (this includes application environment and application environment type). You can however specifically choose to authorize only a set of environments of one application. In this case, the authorized environments are displayed by a badge. Finally, you can choose to authorize a set of environment types of one application. All existing and future environments with the authorized environment type will be able to use the resource. Managing resources by environment type saves the admin from always having to set new authorizations for new environments. Authorized environment types are displayed by a badge. Click on this icon to update rights on an authorized application, or on this one to revoke all authorizations for this application (this includes nested resources). Location resources As for the location, all location resources are private. The mechanism to change the authorizations on a location resource is the same as for the location. Granting an authorization on a location resource to an entity without authorization on the location will automatically grant the entity the location. To manage the authorizations on a specific resource or display the current authorizations, click on your resource (as if to edit its properties) and go to the security panel. 
"},{"title":"AWS","baseurl":"","url":"/documentation/1.4.0/orchestrators/puccini/location_aws.html","date":null,"categories":[],"body":"The puccini orchestrator plugin allows you to deploy applications on AWS. Configuration The configuration of the location needs to be done while configuring the orchestrator, before its activation. You need to fill in the information for your AWS account. In the configuration of the orchestrator, go to locationConfiguration -> aws -> defaultConfiguration . accessKeyId : Your access key id for AWS accessKeySecret : The content of your access key region : The name of your AWS region Tosca mapped / location exposed types The Amazon location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The tosca type tosca.nodes.Compute is mapped to the amazon nodes: org.alien4cloud.puccini.aws.nodes.Instance for a compute To configure a resource, you need to provide values for the mandatory properties (marked with a star): image_id : The image id used to bootstrap an AWS instance instance_type : The AWS instance type key_name : The key pair name security_groups : Normally only one security group is required. user_data : The script needed to bootstrap an AWS instance, normally: #cloud-config runcmd: - echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-puccini-cloud-init-requiretty puccini_concurrent_restriction : The number of tasks that can be executed concurrently on the compute instance. user : The user name used to log in on the instance. key_content : The private key used to authenticate on the instance. Pay attention when copy-pasting: select the multi-line mode before filling in the private key, since the private key spans multiple lines. 
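To make the property list above concrete, an on-demand AWS compute resource could look like the following sketch, written as a TOSCA-style node template. This is illustrative only: all property values (image id, instance type, key pair, user) are placeholders, and the exact serialization of a location resource may differ from what Alien4Cloud exports.

```yaml
# Illustrative sketch only: an on-demand AWS compute resource.
# All property values are placeholders, not real identifiers or credentials.
node_templates:
  MyAwsInstance:
    type: org.alien4cloud.puccini.aws.nodes.Instance
    properties:
      image_id: ami-0123456789abcdef0   # image used to bootstrap the instance
      instance_type: t2.small           # AWS instance type
      key_name: my-keypair              # existing EC2 key pair name
      security_groups: [default]        # normally a single security group
      user: ec2-user                    # user name used to log in on the instance
      user_data: |
        #cloud-config
        runcmd:
          - echo 'Defaults:ec2-user !requiretty' > /etc/sudoers.d/999-puccini-cloud-init-requiretty
```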
 Network The tosca type tosca.nodes.Network can be mapped as the public network: Public Network Exposed as the location type org.alien4cloud.puccini.aws.nodes.PublicNetwork , which results in the attribution of an elastic IP to the linked resource (compute). Normally, this resource does not need any configuration. "},{"title":"Azure","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/location_azure.html","date":null,"categories":[],"body":" Premium feature Azure is a Premium feature. Configuration The configuration of the location is done while configuring the orchestrator, before or after activation. Note that for the azure location, some properties are required. In addition to the base configuration (DSL and imports), you’ll need to provide some information from your bootstrapped manager (available on your Azure portal, or contact your administrator): location : The location in which the cloudify manager is bootstrapped (i.e: westeurope, northeurope, eastus, etc). For now, all applications will be deployed in that location. resourceGroup : The name of the resource group your manager is in. virtualNetwork : The name of the virtual network the manager is linked to. subnet : The name of the subnet the manager is connected to. These are the minimum required properties you must configure before enabling the location. Tosca mapped types The Azure location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The tosca type tosca.nodes.Compute is mapped to the azure nodes: alien.cloudify.azure.nodes.Compute for a linux compute, alien.cloudify.azure.nodes.WindowsCompute for a windows compute. 
Configuring them requires familiarity with the azure way to describe an image (see image reference ) and a flavor (see hardware profile ). Note that if the property storage_account_name is not set on the compute node, Alien4Cloud will create a temporary Storage Account to store the os-disk files. This storage account will be deleted when undeploying the application. To help you get started with Azure, here are step-by-step instructions to quickly add an Ubuntu and a Windows compute resource: Configuring an Ubuntu compute resource Go to your orchestrator locations screen. You should see a screen like this: Select the resource type alien.cloudify.azure.nodes.Image in the combo box and click the add button. Then set the properties as in the screenshot: The next step is to set an Ubuntu image id like the following: Add a alien.cloudify.azure.nodes.hardwareProfile and configure it like the following: Go to the On demand tab and click the auto_config button You should notice that the user property is mandatory. Let’s set a ubuntu user. Make sure you check the pubkey auth only checkbox. Let’s configure a compute which uses public key auth only. Click the configuration property of the Compute node and add a public key as in the screenshot. The path property specifies the full path on the created VM where the ssh public key is stored. The keyData property is the SSH public key certificate used to authenticate with the VM through ssh. The key needs to be at least 2048-bit and in ssh-rsa format. The keyData value must correspond to the private key file that the manager will use to connect to the target VM. By default the manager will use an agent key defined during bootstrap. You can override the private key that the manager will use by overriding the property cloudify_agent > key on the compute’s node. This property specifies a path to the private key that will be used to connect to the host. Make sure that the path to the key exists on the manager. 
You can use the following to get the keyData for Azure: ssh-keygen -y -f <your private keyfile> Configuring a Windows compute resource Let’s configure a Windows compute resource for Azure using a user/password. Networks Public Network Exposed as the location type alien.cloudify.azure.nodes.PublicNetwork , which results in the attribution of a public IP to the linked resource (compute). Private Network The tosca type tosca.nodes.Network can also be mapped as a private network using a location node of type alien.cloudify.azure.nodes.PrivateNetwork . Volumes The tosca type tosca.nodes.BlockStorage can be mapped as two types of volumes: Deletable volumes Exposed as the location type alien.cloudify.azure.nodes.DeletableVolume , this is a volume that will ALWAYS be DELETED when undeploying the application, therefore leading to the loss of all data stored on it. If the property storage_account_name is not set, Alien4Cloud will create a temporary Storage Account to store the datadisk files. This storage account will be deleted when undeploying the application. Reusable volumes Exposed as the location type alien.cloudify.azure.nodes.Volume , this is a volume that will not be deleted at the end of the application life-cycle. It can therefore, between two deployments of the same application on the same environment and location, be re-used and attached to a compute, making the data previously stored on it accessible again. When using reusable volumes, the property storage_account_name is mandatory. The storage account won’t be deleted when undeploying the application and it will keep the datadisk file. Scaling Scaling is now supported with Azure. When scaling a compute it will scale all resources linked to the compute (the Compute itself, the volumes, the IPs and the nodes that are hosted on the compute). 
 Known scaling limitations, due to the combination of 2 constraints of Cloudify’s scaling groups: One node can only belong to one group at a time Storage Account nodes must be part of the scaling group. When having multiple scalable computes in a single application, each compute and its volumes must be placed inside a storage account distinct from the others. For instance: Scalable compute A has 2 Volumes in storage account S1. Scalable compute B must be in a different storage account from S1. Scalable compute C can have its osdisk stored in storage account S3 and its volumes in different storage accounts S3’ and S3’’, as long as they are different from the ones used by computes A and B and their volumes. "},{"title":"Azure","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/location_azure.html","date":null,"categories":[],"body":" Premium feature Azure is a Premium feature. Configuration The configuration of the location is done while configuring the orchestrator, before or after activation. Note that for the azure location, some properties are required. In addition to the base configuration (DSL and imports), you’ll need to provide some information from your bootstrapped manager (available on your Azure portal, or contact your administrator): location : The location in which the cloudify manager is bootstrapped (i.e: westeurope, northeurope, eastus, etc). For now, all applications will be deployed in that location. resourceGroup : The name of the resource group your manager is in. virtualNetwork : The name of the virtual network the manager is linked to. subnet : The name of the subnet the manager is connected to. These are the minimum required properties you must configure before enabling the location. Tosca mapped types The Azure location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. 
Compute The tosca type tosca.nodes.Compute is mapped to the azure nodes: alien.cloudify.azure.nodes.Compute for a linux compute, alien.cloudify.azure.nodes.WindowsCompute for a windows compute. Configuring them requires familiarity with the azure way to describe an image (see image reference ) and a flavor (see hardware profile ). Note that if the property storage_account_name is not set on the compute node, Alien4Cloud will create a temporary Storage Account to store the os-disk files. This storage account will be deleted when undeploying the application. To help you get started with Azure, here are step-by-step instructions to quickly add an Ubuntu and a Windows compute resource: Configuring an Ubuntu compute resource Go to your orchestrator locations screen. You should see a screen like this: Select the resource type alien.cloudify.azure.nodes.Image in the combo box and click the add button. Then set the properties as in the screenshot: The next step is to set an Ubuntu image id like the following: Add a alien.cloudify.azure.nodes.hardwareProfile and configure it like the following: Go to the On demand tab and click the auto_config button You should notice that the user property is mandatory. Let’s set a ubuntu user. Make sure you check the pubkey auth only checkbox. Let’s configure a compute which uses public key auth only. Click the configuration property of the Compute node and add a public key as in the screenshot. The path property specifies the full path on the created VM where the ssh public key is stored. The keyData property is the SSH public key certificate used to authenticate with the VM through ssh. The key needs to be at least 2048-bit and in ssh-rsa format. The keyData value must correspond to the private key file that the manager will use to connect to the target VM. By default the manager will use an agent key defined during bootstrap. 
You can override the private key that the manager will use by overriding the property cloudify_agent > key on the compute’s node. This property specifies a path to the private key that will be used to connect to the host. Make sure that the path to the key exists on the manager. You can use the following to get the keyData for Azure: ssh-keygen -y -f <your private keyfile> Configuring a Windows compute resource Let’s configure a Windows compute resource for Azure using a user/password. Networks Public Network Exposed as the location type alien.cloudify.azure.nodes.PublicNetwork , which results in the attribution of a public IP to the linked resource (compute). Private Network The tosca type tosca.nodes.Network can also be mapped as a private network using a location node of type alien.cloudify.azure.nodes.PrivateNetwork . Volumes The tosca type tosca.nodes.BlockStorage can be mapped as two types of volumes: Deletable volumes Exposed as the location type alien.cloudify.azure.nodes.DeletableVolume , this is a volume that will ALWAYS be DELETED when undeploying the application, therefore leading to the loss of all data stored on it. If the property storage_account_name is not set, Alien4Cloud will create a temporary Storage Account to store the datadisk files. This storage account will be deleted when undeploying the application. Reusable volumes Exposed as the location type alien.cloudify.azure.nodes.Volume , this is a volume that will not be deleted at the end of the application life-cycle. It can therefore, between two deployments of the same application on the same environment and location, be re-used and attached to a compute, making the data previously stored on it accessible again. When using reusable volumes, the property storage_account_name is mandatory. The storage account won’t be deleted when undeploying the application and it will keep the datadisk file. Scaling Scaling is now supported with Azure. 
When scaling a compute it will scale all resources linked to the compute (the Compute itself, the volumes, the IPs and the nodes that are hosted on the compute). Known scaling limitations, due to the combination of 2 constraints of Cloudify’s scaling groups: One node can only belong to one group at a time Storage Account nodes must be part of the scaling group. When having multiple scalable computes in a single application, each compute and its volumes must be placed inside a storage account distinct from the others. For instance: Scalable compute A has 2 Volumes in storage account S1. Scalable compute B must be in a different storage account from S1. Scalable compute C can have its osdisk stored in storage account S3 and its volumes in different storage accounts S3’ and S3’’, as long as they are different from the ones used by computes A and B and their volumes. "},{"title":"BYON","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/location_byon.html","date":null,"categories":[],"body":"The open source cloudify 3 orchestrator plugin allows you to deploy applications on existing machines. Configuration The configuration of the location is done while configuring the orchestrator, before or after activation. Normally there is nothing to do here, as the default provided configurations are good enough to have the location working. Tosca mapped / location exposed types The BYON location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The tosca type tosca.nodes.Compute is mapped to the BYON nodes: alien.cloudify.byon.nodes.LinuxCompute for a linux compute, alien.cloudify.byon.nodes.WindowsCompute for a windows compute. Note that on each Compute, you will have to fill the host_pool_service_endpoint property. This is the URL of the Host-Pool Service. 
More information about the Host-Pool Service is available in the cloudify official documentation or on their github project . Alien4Cloud has a blueprint to help you deploy the Host-Pool Service that can be found on our github . "},{"title":"BYON","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/location_byon.html","date":null,"categories":[],"body":"The open source cloudify 4 orchestrator plugin allows you to deploy applications on existing machines. Configuration The configuration of the location is done while configuring the orchestrator, before or after activation. Normally there is nothing to do here, as the default provided configurations are good enough to have the location working. Tosca mapped / location exposed types The BYON location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The tosca type tosca.nodes.Compute is mapped to the BYON nodes: alien.cloudify.byon.nodes.LinuxCompute for a linux compute, alien.cloudify.byon.nodes.WindowsCompute for a windows compute. Note that on each Compute, you will have to fill the host_pool_service_endpoint property. This is the URL of the Host-Pool Service. More information about the Host-Pool Service is available in the cloudify official documentation or on their github project . Alien4Cloud has a blueprint to help you deploy the Host-Pool Service that can be found on our github . "},{"title":"BYON","baseurl":"","url":"/documentation/1.4.0/orchestrators/puccini/location_byon.html","date":null,"categories":[],"body":"The puccini orchestrator plugin allows you to deploy applications on existing machines. Prerequisites The physical or virtual machines must exist before configuring the resources. You can easily bring up virtual machines with vagrant . 
 Tosca mapped / location exposed types The BYON location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The resource type org.alien4cloud.puccini.byon.nodes.Compute is provided for mapping a compute. ip_address : The IP address of the machine user : The user used to log in on the machine key_content : The private key used to log in on the machine puccini_concurrent_restriction : The number of tasks that can be executed concurrently on the compute instance. "},{"title":"Docker","baseurl":"","url":"/documentation/1.4.0/orchestrators/puccini/location_docker.html","date":null,"categories":[],"body":"The puccini orchestrator plugin allows you to deploy applications on Docker. Tosca mapped / location exposed types The Docker location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The tosca type tosca.nodes.Compute is mapped to the Docker nodes: org.alien4cloud.puccini.docker.nodes.Container for a docker container Explanation of the properties: image_id : The image id used to run the container tag : The tag of the docker image interactive : exposed_ports : The exposed ports of the docker container port_mappings : Mappings of the exposed ports of the docker container to ports of the host. from : The exposed port of the docker container to : The mapped port on the host pull_image : If selected, the docker image will be pulled from docker hub, otherwise the local image will be used. puccini_concurrent_restriction : The number of tasks that can be executed concurrently on the compute instance. 
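To make the container property list above concrete, an on-demand container resource could look like the following sketch, written as a TOSCA-style node template. This is illustrative only: the image, tag, and port values are placeholders, and the exact list syntax for port_mappings is an assumption based on the from / to sub-properties described above.

```yaml
# Illustrative sketch only: an on-demand Docker container resource.
# All property values are placeholders.
node_templates:
  MyContainer:
    type: org.alien4cloud.puccini.docker.nodes.Container
    properties:
      image_id: nginx        # image used to run the container
      tag: latest            # tag of the docker image
      pull_image: true       # pull from docker hub instead of using a local image
      exposed_ports:
        - 80                 # port exposed by the container
      port_mappings:
        - from: 80           # exposed port of the container
          to: 8080           # mapped port on the host
```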
 Network The tosca type tosca.nodes.Network can be mapped as org.alien4cloud.puccini.docker.nodes.Network : Explanation of the properties: network_name : The name of the network cidr : The IP range of the network Volumes TODO Deletable volumes TODO "},{"title":"Openstack","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/location_openstack.html","date":null,"categories":[],"body":"The open source cloudify 3 orchestrator plugin allows you to deploy applications on openstack. Configuration The configuration of the location is done while configuring the orchestrator, before or after activation. Normally there is nothing to do here, as the default provided configurations are good enough to have the location working. Tosca mapped / location exposed types The Openstack location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The tosca type tosca.nodes.Compute is mapped to the openstack nodes: alien.nodes.openstack.Compute for a linux compute, alien.nodes.openstack.WindowsCompute for a windows compute. To help you generate those, configuration resources alien.nodes.openstack.Image and alien.nodes.openstack.Flavor can be created, and then used to auto-generate Compute nodes. Network The tosca type tosca.nodes.Network can be mapped as two types of network: Public Network Exposed as the location type alien.nodes.openstack.PublicNetwork , which results in the attribution of a floating IP to the linked resource (compute). Make sure to fill in the required property floatingip with the name of an existing network on which the linked resources will be connected. Private Network The tosca type tosca.nodes.Network can also be mapped as a private network using a location node of type alien.nodes.openstack.PrivateNetwork . 
 Volumes The tosca type tosca.nodes.BlockStorage can be mapped as two types of volumes: Deletable volumes Exposed as the location type alien.cloudify.openstack.nodes.DeletableVolume , this is a volume that will ALWAYS be DELETED when undeploying the application, therefore leading to the loss of all data stored on it. Reusable volumes Exposed as the location type alien.cloudify.openstack.nodes.Volume , this is a volume that will not be deleted at the end of the application life-cycle. It can therefore, between two deployments of the same application on the same environment and location, be re-used and attached to a compute, making the data previously stored on it accessible again. Scaling Scaling is now fully supported: we can scale a single Compute , or a Compute + Storage + IP-Address association. Availability zone You can add a alien.cloudify.openstack.nodes.AvailabilityZone with the value of your availability zone on OpenStack. To use the anti-affinity placement policy, at least two zones are necessary. After that, you can add your nodes (on the topology view) to the same group; Alien will try to place these servers on different zones during the deployment. When you redeploy an application with volumes, Alien tries to keep all volumes attached to a server in the same zone and, if a volume already has a zone, to place the server in the zone of that volume. The placement algorithm distributes the servers equitably across the zones. "},{"title":"Openstack","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/location_openstack.html","date":null,"categories":[],"body":"The open source cloudify 4 orchestrator plugin allows you to deploy applications on openstack. Configuration The configuration of the location is done while configuring the orchestrator, before or after activation. Normally there is nothing to do here, as the default provided configurations are good enough to have the location working. 
 Tosca mapped / location exposed types The Openstack location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The tosca type tosca.nodes.Compute is mapped to the openstack nodes: alien.nodes.openstack.Compute for a linux compute, alien.nodes.openstack.WindowsCompute for a windows compute. To help you generate those, configuration resources alien.nodes.openstack.Image and alien.nodes.openstack.Flavor can be created, and then used to auto-generate Compute nodes. Network The tosca type tosca.nodes.Network can be mapped as two types of network: Public Network Exposed as the location type alien.nodes.openstack.PublicNetwork , which results in the attribution of a floating IP to the linked resource (compute). Make sure to fill in the required property floatingip with the name of an existing network on which the linked resources will be connected. Private Network The tosca type tosca.nodes.Network can also be mapped as a private network using a location node of type alien.nodes.openstack.PrivateNetwork . Volumes The tosca type tosca.nodes.BlockStorage can be mapped as two types of volumes: Deletable volumes Exposed as the location type alien.cloudify.openstack.nodes.DeletableVolume , this is a volume that will ALWAYS be DELETED when undeploying the application, therefore leading to the loss of all data stored on it. Reusable volumes Exposed as the location type alien.cloudify.openstack.nodes.Volume , this is a volume that will not be deleted at the end of the application life-cycle. It can therefore, between two deployments of the same application on the same environment and location, be re-used and attached to a compute, making the data previously stored on it accessible again. Scaling Scaling is now fully supported: we can scale a single Compute , or a Compute + Storage + IP-Address association. 
 Availability zone You can add a alien.cloudify.openstack.nodes.AvailabilityZone with the value of your availability zone on OpenStack. To use the anti-affinity placement policy, at least two zones are necessary. After that, you can add your nodes (on the topology view) to the same group; Alien will try to place these servers on different zones during the deployment. When you redeploy an application with volumes, Alien tries to keep all volumes attached to a server in the same zone and, if a volume already has a zone, to place the server in the zone of that volume. The placement algorithm distributes the servers equitably across the zones. "},{"title":"Openstack","baseurl":"","url":"/documentation/1.4.0/orchestrators/puccini/location_openstack.html","date":null,"categories":[],"body":"The puccini orchestrator plugin allows you to deploy applications on Openstack. Configuration The configuration of the location needs to be done while configuring the orchestrator, before its activation. You need to fill in the information for your Openstack account. In the configuration of the orchestrator, go to locationConfiguration -> openstack -> defaultConfiguration . keystoneUrl : The url of the keystone service tenant : The tenant where the applications will be deployed. user : The username of your account password : The password of your account region : The name of the region Tosca mapped / location exposed types The Openstack location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. 
Compute The tosca type tosca.nodes.Compute is mapped to the openstack nodes: org.alien4cloud.puccini.openstack.nodes.Compute for a linux compute Normally, you need to provide the basic information for the resource: image : The image id flavor : The flavor id key_pair_name : The name of the key pair security_group_names : You can provide the security groups for this resource user : The user name used to log in on the instance. key_content : The private key used to authenticate on the instance. Pay attention when copy-pasting: select the multi-line mode before filling in the private key, since the private key spans multiple lines. puccini_concurrent_restriction : The number of tasks that can be executed concurrently on the compute instance. Network The tosca type tosca.nodes.Network can be mapped as two types of network: org.alien4cloud.puccini.openstack.nodes.ExternalNetwork org.alien4cloud.puccini.openstack.nodes.Network Public Network Exposed as the location type org.alien4cloud.puccini.openstack.nodes.ExternalNetwork , which results in the attribution of a floating IP to the linked resource (compute). Make sure to fill in the required property network_name with the name of an existing public network on Openstack, to which the linked resources will be connected. Private Network The tosca type org.alien4cloud.puccini.openstack.nodes.Network is a mapping node for the private network. Normally, the properties cidr and network_name should be given. Volumes TODO. Deletable volumes TODO. "},{"title":"vSphere","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/location_vsphere.html","date":null,"categories":[],"body":" Premium feature vSphere is a Premium feature. Configuration The configuration of the location is done while configuring the orchestrator, before or after activation. 
Normally there is nothing to do here, as the default provided configurations are good enough to have the location working. Tosca mapped / location exposed types The vSphere location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The tosca type tosca.nodes.Compute is mapped to the vSphere nodes: alien.cloudify.vsphere.nodes.Compute for a linux compute, alien.cloudify.vsphere.nodes.WindowsCompute for a windows compute. To help you generate those, configuration resources alien.cloudify.vsphere.nodes.Image and alien.cloudify.vsphere.nodes.InstanceType can be created, and then used to auto-generate Compute nodes. Network Not supported yet Volumes The tosca type alien.cloudify.vsphere.nodes.Volume can be mapped as two types of volumes: Deletable volumes Exposed as the location type alien.cloudify.openstack.nodes.DeletableVolume , this is a volume that will ALWAYS be DELETED when undeploying the application, therefore leading to the loss of all data stored on it. Reusable volumes Exposed as the location type alien.cloudify.openstack.nodes.Volume , this is a volume that will not be deleted at the end of the application life-cycle. It can therefore, between two deployments of the same application on the same environment and location, be re-used and attached to a compute, making the data previously stored on it accessible again. Known limitation: Using multiple volumes per compute is not supported at the moment due to a constraint on Cloudify’s vSphere plugin. Scaling For now, scaling is supported only for a single compute, i.e. a compute which is not linked to a network and doesn’t have any volumes attached to it. This should be fixed with the next cloudify version. 
"},{"title":"vSphere","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/location_vsphere.html","date":null,"categories":[],"body":" Premium feature vSphere is a Premium feature. Configuration The configuration of the location is done while configuring the orchestrator, before or after activation. Normally there is nothing to do here, as the default provided configurations are good enough to have the location working. Tosca mapped / location exposed types The vSphere location exposes some types to help you configure a deployment and map the native Tosca types. These nodes are exposed as on demand resources on the location management view. Compute The tosca type tosca.nodes.Compute is mapped to the vSphere nodes: alien.cloudify.vsphere.nodes.Compute for a linux compute, alien.cloudify.vsphere.nodes.WindowsCompute for a windows compute. To help you generate those, configuration resources alien.cloudify.vsphere.nodes.Image and alien.cloudify.vsphere.nodes.InstanceType can be created, and then used to auto-generate Compute nodes. Network Not supported yet Volumes The tosca type alien.cloudify.vsphere.nodes.Volume can be mapped as two types of volumes: Deletable volumes Exposed as the location type alien.cloudify.openstack.nodes.DeletableVolume , this is a volume that will ALWAYS be DELETED when undeploying the application, therefore leading to the loss of all data stored on it. Reusable volumes Exposed as the location type alien.cloudify.openstack.nodes.Volume , this is a volume that will not be deleted at the end of the application life-cycle. It can therefore, between two deployments of the same application on the same environment and location, be re-used and attached to a compute, making the data previously stored on it accessible again. Known limitation: Using multiple volumes per compute is not supported at the moment due to a constraint on Cloudify’s vSphere plugin. Scaling For now, scaling is supported only for a single compute, i.e. 
a compute which is not linked to a network and doesn’t have any volumes attached to it. This should be fixed with the next cloudify version. "},{"title":"Docker support","baseurl":"","url":"/documentation/1.4.0/orchestrators/marathon_driver/marathon_docker_support.html","date":null,"categories":[],"body":"We are not following the exact types from TOSCA, as we are not fully in line with their current implementation. We nevertheless plan to provide compatibility with TOSCA types later. The current TOSCA types for containers imply specific Capabilities and Requirements . We believe this to be a wrong approach and that the difference between a Docker-based component and a non-Docker component should be handled by the orchestrator. For instance, deploying a MongoDb via a Docker image in a container or through scripts on a compute should in our opinion lead to a MongoCapability that derives from tosca.capabilities.Endpoint in both implementations. In the future, this approach will lead to the ability to deploy topologies with both container and non-container nodes (based on orchestrator support). Docker-types description The node_type tosca.nodes.Container.Application.DockerContainer is derived_from the existing type tosca.nodes.Container.Application . Defining operations The implementation value of the Create operation takes the Docker image, as explained below. Other operations are not supported at the moment. Technical configuration can be given to the container using the Create operation inputs. Inputs will be converted into environment variables, docker run arguments, or docker options depending on the input’s prefix. Respectively : ENV_{INPUT_NAME} will result in an environment variable inside the container named INPUT_NAME ARG_{INPUT_NAME} will result in the input value being passed as an ENTRYPOINT argument (those are unnamed) OPT_{INPUT_NAME} will result in an arbitrary command-line option named INPUT_NAME for the docker run command. 
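As an illustration of the prefix convention above, a Create operation could declare inputs like the following sketch. The input names and values are hypothetical; only the ENV_ / ARG_ / OPT_ prefixes and the image artifact form come from this document.

```yaml
# Illustrative sketch only: Create operation inputs using the prefix convention.
# Input names and values are placeholders.
create:
  inputs:
    ENV_MONGO_PORT: 27017          # becomes environment variable MONGO_PORT inside the container
    ARG_CONFIG_FILE: /etc/app.cfg  # value passed as an (unnamed) ENTRYPOINT argument
    OPT_memory: 512m               # becomes a docker run command-line option named "memory"
  implementation:
    file: mongo:latest
    repository: docker
    type: tosca.artifacts.Deployment.Image.Container.Docker
```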
Inputs should be used for technical configuration (such as a port definition) or configuration that should be auto-resolved by the orchestrator. For user-specific configuration, use the new docker_options, docker_env_vars or docker_run_args instead. See Nodecellar sample for an example. To provide the container with a custom command to run, use the docker_run_cmd property. Docker images Docker images are defined using a new artifact type combined with a repository definition. To specify which image a container uses, you must define the image as the implementation artifact of the container’s Create operation, like this: create : implementation : file : mongo:latest repository : docker type : tosca.artifacts.Deployment.Image.Container.Docker Adding External Storage as Docker Volumes You can add External Volumes to your containers by using the alien.nodes.DockerExtVolume type. You must however define a volume name (that must be unique in your provider) and a container mount point. Getting a property from a requirement target To implement a dependency from a container to another component in a flexible way, we want to allow users to use either environment variables, docker run arguments or a custom command, as they see fit. To achieve this, we need a way to request a property of a requirement TARGET from within the SOURCE properties definition. See the Nodecellar sample for an example. This means that within a node definition, given a requirement name, we want to access a property defined in the TARGET of such requirement. For example: { get_property: [REQ_TARGET, mongo_db, port] } should return the port property of the TARGET of the mongo_db requirement, which is a capability. If the property cannot be found, we will look for it in the TARGET node itself. Tuning the container We added a set of properties to allow user configuration of the containers within the Alien editor. 
Docker CLI arguments It is possible to define arguments for the Docker CLI using the docker_cli_args property as a map of key/value pairs. It is also possible (and recommended) to create a custom datatype if specific CLI args are expected for the application ( see the Nodecellar example ). Docker run command To define a command to be executed inside the container, use the docker_run_cmd property. This will override a CMD statement in the container’s Dockerfile. The value will be wrapped into /bin/sh -c '${cmd}' . Docker run args If your container’s Dockerfile uses an ENTRYPOINT, you can specify arguments using the docker_run_args property. Those will be appended to the docker run command. Docker env variables To set environment variables inside the container, use the docker_env_vars property map. Defining capabilities Modularity We aim for topologies where Docker containers and non-docker apps can live together. As such, capabilities for Docker containers should inherit usual capabilities. For instance, in the Nodecellar sample , we defined: The alien.capabilities.endpoint.Mongo capability, which inherits tosca.capabilities.Endpoint and is the generic ability to expose a Mongo database, The alien.capabilities.endpoint.docker.Mongo capability, which derives from the latter. This capability is exposed by the mongo_db capability of the MongoDocker Node-type. Using inheritance, this means that any other Node-type requiring alien.capabilities.endpoint.Mongo can use the MongoDocker through a classic ConnectsTo relationship. Bridging between container and host ports To allow bridge networking between the container and its host, we added the docker_port_mapping property to the alien.capabilities.endpoint.docker.Mongo relationship. This will be interpreted by the Orchestrator as the Host port, while the port property (which we inherited from the Endpoint capability) represents the port inside the container. If the value is 0, the Orchestrator will randomly allocate a port. 
If no value is specified, then Host networking will be used. Hosting requirements As per the usual host requirement, we decided to use a lower_bound value of 0. This allows definition of topologies with containers only, which we will be able to deploy onto already provisioned clusters, like Mesos or Kubernetes. "},{"title":"Mesos + Marathon","baseurl":"","url":"/documentation/1.4.0/orchestrators/marathon_driver/marathon_driver.html","date":null,"categories":[],"body":"Mesos is like a kernel for the datacenter. It provides fine-grained abstraction of the datacenter resources, isolation and native support for Docker containers. Marathon is an open-source meta-framework for Mesos dedicated to container orchestration. It is developed and maintained by Mesosphere. Marathon is a production-ready container orchestration platform for Mesos with first-class Docker support. It features automated health-checks and failure recovery, allowing seamless execution of services or long-running jobs. Being a meta-framework, Marathon is also proficient in running other Mesos frameworks, such as Chronos. Combined, Mesos and Marathon can turn any datacenter into a highly available, scalable and fault-tolerant PaaS for cloud applications. We developed a Marathon orchestrator plugin for Alien 4 Cloud, as part of our 1.4.0 roadmap to achieve Docker support. This project is at an alpha stage and still undergoing development. We might add, change, or delete any functionality described in this document. Any feedback would be highly appreciated! Alien 4 Cloud Marathon support The plugin features deployment and management of complex topologies with containers in Marathon by leveraging docker-tosca-types . Currently, we only support running Docker containers. Topologies deployed with the plugin benefit from Marathon’s fault-recovery features. This means that Marathon will gracefully re-schedule and restart (possibly on a different agent) a failing container. 
We made a demonstration video showcasing the deployment of Nodecellar using the plugin. Service discovery TL;DR : Service discovery is pretty much automatic using the Marathon plugin with MesosDNS & MarathonLB running on your cluster. Service discovery between containers launched by the plugin through Marathon is achieved using MesosDNS and MarathonLB , respectively a DNS service and a HAProxy load balancer that provides port-based service discovery. Both are running in the cluster as Marathon tasks. Containers launched with Marathon are all resolvable through DNS resolution using the app_name.marathon.mesos pattern (but you will still need to know the containers’ allocated port, which should be randomly assigned by Marathon), or using MarathonLB as a reverse proxy, using a well-known service port. When assigned a service port, containers running in Marathon can be accessed by reaching MarathonLB on the said service port. Because MarathonLB itself is running on Marathon, its IP address is also resolvable through MesosDNS. This means that containers with service ports can be accessed using the pattern <marathon-lb>.marathon.mesos:<service_port>, where <marathon-lb> is MarathonLB's app ID in Marathon. This whole mechanism being relatively complex, the plugin will automatically assign a service port to containers that are targeted by a ConnectsTo relationship from at least one other container in the topology. The plugin will also replace any reference to the target’s endpoint port and ip_address attributes with the service port and MarathonLB DNS name, respectively. External Storage support TL;DR : We added experimental support of the external storage feature from Marathon. We currently use REX-Ray as a Docker Volume Driver. The REX-Ray service needs IAAS credentials to operate. Please note that while REX-Ray is able to dynamically provision volumes on your provider, those will NOT be cleaned up upon undeployment. 
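For reference, a Marathon app definition with an external volume handled by REX-Ray roughly takes the following shape (shown here in YAML for readability; Marathon’s API takes the JSON equivalent, and the id, paths and volume name are illustrative, not what the plugin necessarily generates):

```yaml
# Sketch of a Marathon app using an external volume through the
# Docker Volume Driver Isolator (dvdi) with the REX-Ray driver.
id: /mongo
container:
  type: DOCKER
  docker:
    image: mongo:latest
  volumes:
    - containerPath: /data/db
      mode: RW
      external:
        name: mongo-data        # must be unique on your storage provider
        provider: dvdi
        options:
          dvdi/driver: rexray   # delegate volume operations to REX-Ray
instances: 1                    # external volumes currently imply a singleton
```

The `instances: 1` constraint is what the known limitations below refer to: Marathon only allows singletons when external volumes are attached to a container.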
REX-Ray currently only supports AWS, although it is moving forward quite rapidly, and new features or providers are likely to be added in the near future. As of today, REX-Ray does not support high availability. REX-Ray is built on a client-server architecture, with clients operating as Docker Volume Drivers that use the libStorage service. The libStorage service (which we also call RexRayServer for simplicity) runs alongside the master node(s) and is able to provision and mount storage resources on the fly. It needs to be configured and given proper credentials to manage storage providers. Storage provider configuration is best described in libStorage’s documentation . HTTP Health checks In addition to Mesos task states, Marathon features automatic HTTP health checks against running containers. A health check is considered passing if its HTTP response code is between 200 and 399 inclusive, and its response is received within a timeout period. The plugin adds a default health check to all the containers in the topology. Known limitations and caveats It is not possible to scale Docker containers. This is due to Marathon only allowing singletons when using external volumes in conjunction with containers. We did not exactly follow the TOSCA model for Docker containers as it is still incubating. It is not possible to stop the deployment of an application. Wait for it to be deployed, then hit un-deploy. The connection to Marathon is NOT secured. Health check events are not parsed. However, the health of each instance is polled when refreshing the runtime view. "},{"title":"Getting started","baseurl":"","url":"/documentation/1.4.0/getting_started/new_getting_started.html","date":null,"categories":[],"body":"This guide explains how to get started with Alien4Cloud and deploy your first application. The goal of this guide is not to provide extensive coverage of all functionalities. 
Install, launch and configure alien4cloud Prerequisites Operating system : Linux or MacOS version 10.12 (we use a native library for docker communication that has been compiled on this version and we are aware of issues on earlier versions). Curl : Our installation script leverages curl, so ensure you have the command installed. Python : While we don’t really require python for alien4cloud, our getting started script leverages python to pre-configure some elements in alien4cloud for you. Running the script without python will install alien4cloud, start it and then fail to configure resources, so you’ll have to configure them manually. Java 8 : We don’t install java for you, so just make sure you have a JDK 8 or higher installed on your workstation. If you don’t, you can follow the instructions here . Docker : Our getting started leverages a minimal TOSCA orchestrator that creates docker images to orchestrate deployments independently from one another. We will also use docker containers in place of VMs to launch TOSCA blueprints. You need an up-to-date docker, especially if you are running on mac as we leverage the new docker for mac and unix socket communication. We have tested on Docker version 17.03.1-ce, build c6d412e . Ports : Nothing running on port 8088: That’s alien4cloud’s default port and, as we just launch a4c in our getting started script, we need this port free. Browser : A supported web browser (check versions here ). Install, launch and configure alien4cloud Open a terminal and launch the following command: curl -s https://raw.githubusercontent.com/alien4cloud/alien4cloud.github.io/sources/files/1.4.0/getting_started.sh | bash Yes, we do it all for you! So what’s going on in this script? Install : We will create a directory named alien4cloud-getstarted in which we will fetch the alien4cloud opensource version, a minimal TOSCA orchestrator called puccini, the plugin to let alien4cloud work with puccini, and we’re going to configure it all for you! 
Prepare : Pull some docker images required for puccini and that we know work with our samples (Docker images tend to be minimal and some of them don’t even have bash installed or a sudo command). Start : Well that’s kind of an easy step, we just launch alien4cloud in the background for you. Post-start configure : When launched we configure an orchestrator and its location for you so you can perform docker deployments. That’s just a few curl requests on the a4c rest API! Launch your browser : If possible, so you really don’t have anything to do! Except for docker images, we don’t store anything outside of the alien4cloud-getstarted directory. If you’d like to remove the alien4cloud getting started components, just remove this directory. Prerequisites Install VirtualBox Install Vagrant Download and start the box Download the getting started Vagrantfile Put the Vagrantfile in a folder; all Vagrant metadata will be written there. Go to the folder created in the last step. Execute ‘vagrant up’ (Note that the first launch may take some time as the box size is 3Gb) Next time, when you bring up the machine, you should execute ‘vagrant up –provision’ instead of ‘vagrant up’ or else Alien won’t start. This is a limitation for the moment, Alien’s web app should have been packaged as a service. The URL of alien4cloud will be available at http://192.168.33.10:8088/ Let’s play! Log into the application using the default user: user: admin password: admin The admin user is granted with all rights on the platform. This getting started will perform all operations using the admin user. Of course, if you want to set up Alien4Cloud for production usage with multi-user and role management, you should probably refer to advanced configurations and installation of Alien4Cloud as well as the user guide for user management . Import components in Alien4Cloud The Wordpress topology uses custom types, so we have to upload them first. 
Find those types on github: https://github.com/alien4cloud/samples apache : the webserver php : the php interpreter mysql : the database required by Wordpress wordpress : the blog component topology-wordpress : the topology composed of the previous components The quickest way to import all of these archives is the Git integration feature in Alien4Cloud. Click on button in the navigation bar. Then click side bar sub-menu . Now add a new Git location: . Fill the modal like the example below with the following: Repository Url : https://github.com/alien4cloud/samples.git Branch or Tag / Archive’s folder 1.4.0 / apache 1.4.0 / php 1.4.0 / mysql 1.4.0 / wordpress 1.4.0 / topology-wordpress Now, click on to pull all components from git and upload them into the Alien4Cloud catalog. Wait for the import success bar to show and continue: Some warnings will be thrown if you specify another branch or tag. We release versions of the samples according to our implementation of TOSCA. If you take the wrong sample version, required normative types can be missing. Find detailed information about the Wordpress topology in the devops guide . Create a Wordpress application Now that we have the Wordpress template ready to use, we can create an application based on it. To do this, go to section. Click on . In the new application modal, fill the name with ‘wordpress’, click on the Topology Template button and select the wordpress-topology in the topology template list. Click on create to create the application and be redirected to the application information page. To see your application topology, click on . This will take you to the topology editor. As you can see the topology is already complete. We will cover topology editing later on so, for now, let’s prepare for deployment. Setup and deploy your application Click on to configure your deployment: Inputs are already configured with default values so we automatically skip this step to let you select the location. 
The installation step configured a local docker location, so click on it to select it as a target location: The node matching step is again done automatically for you. During this step Alien4cloud found a valid match for your compute nodes; as there is just one template defined on the orchestrator (providing an ubuntu container), that’s the one that has been picked up. While it is good enough for a deployment, we will set up some docker-related advanced settings that are not included in the portable topology (as they are quite specific to this local docker deployment). What we want to configure is the wordpress container port mapping. Note that this is not done automatically yet by puccini but could be in future implementations. So let’s go to the node matching tab and select the wordpress compute node called ‘computeWww’ in our topology, then click on the currently selected match for the node (the one and only choice on this location as currently defined). Let’s first expose the port out of the container by changing the exposed_ports property: Click on the edit button for the exposed_ports property. In the modal click on the button to add a new exposed port. Click on the 0 element edit button, set the port to 80 and the protocol to tcp, and close the modal. Let’s now configure the port_mappings using the same procedure to configure a port mapping from 80 to, for example, 9099. Now that the port from the docker container is exposed to the outside world, we can deploy our application! So just go to the deploy tab and click on the button! Note that this may take a few minutes as we are going to download and install the various components of the topology. More on matching and application configuration If you have not done it yet, you can get more information on the application concepts in alien4cloud as well as deployment configuration and matching concepts here. You can also read more on the alien user guide’s application management section. 
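As a recap, the docker-related settings chosen during matching can be summarized as follows (a sketch using the property names shown in the editor; the actual serialization used by alien4cloud and puccini may differ):

```yaml
# Illustrative recap of the matching configuration above.
computeWww:
  exposed_ports:
    - port: 80          # port exposed out of the wordpress container
      protocol: tcp
  port_mappings:
    - from: 80          # container port
      to: 9099          # host port, hence http://127.0.0.1:9099
```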
Check that your application is up and running On the runtime view, you can see the detailed deployment progress. Click on the side bar sub-menu. When all nodes are deployed, just open the wordpress url in your browser. Note that as we defined a specific port mapping making the inner docker port available on host port 9099, we have to adapt the URL accordingly: http://127.0.0.1:9099. Shut down alien4cloud You can still play with alien4cloud of course as there is plenty to discover ;). But when you want to shut it down just launch the following command: pkill -f 'alien4cloud-ui-*' To launch it again, there is no need to run the curl download again. Just go to the alien4cloud-getstarted/alien4cloud folder and launch the alien4cloud.sh script. cd alien4cloud-getstarted/alien4cloud # note: > /dev/null 2>&1 & is just to launch it in the background, so just remove that if you wish to launch alien in the foreground. ./alien4cloud.sh > /dev/null 2>&1 & Done!!! That is it! You should now be a little bit familiar with the Alien4cloud interface. But there is more under the hood! Now you can go further into the getting started to see more cool stuff, like how to deploy on Amazon cloud. (Example of a cloudify manager) You can see more details about Puccini’s supported locations and resources here . For more, take a look at the user guide . If you want to understand a bit more about the concepts developed in Alien4cloud, here you go, and you might want to check out the TOSCA usage guide too. "},{"title":"Create TOSCA archive with alien4cloud","baseurl":"","url":"/documentation/1.4.0/devops_guide/custom_types/new_types_with_alien.html","date":null,"categories":[],"body":"While alien4cloud has a TOSCA topology editor, it does not yet ship a TOSCA type editor. Therefore you will have to write TOSCA types in an external editor. 
In order to perform validation of your types when writing them, you should first have an alien4cloud server available (either local or remote). Then you can use the following script that will package a given directory in a zip file, upload it into alien and display the validation result (out of the API response json). As long as your archive is a SNAPSHOT archive you can re-upload it as much as you like. You can then use it in a topology and launch it to perform the actual deployment on the cloud you like! "},{"title":"Node filter","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/node_filter.html","date":null,"categories":[],"body":"A node filter definition defines criteria for selection of a TOSCA Node Template based upon the template’s property values, capabilities and capability properties. Keynames Keyname Required Type Description tosca_definitions_version properties no map of property filter definition An optional sequenced list of property filters that would be used to select (filter) matching TOSCA entities (e.g., Node Template, Node Type, Capability Types, etc.) based upon their property definitions’ values. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 capabilities no map of string (1) or map of capability filter definition An optional sequenced list of capability names or types that would be used to select (filter) matching TOSCA entities based upon their existence. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 (1) Alien 4 cloud does not support the map of string notation in the current implementation. Grammar node_filter : properties : - <property_filter_def_1> - ... - <property_filter_def_n> capabilities : - <capability_name_or_type_1> : properties : - <cap_1_property_filter_def_1> - ... - <cap_m_property_filter_def_n> - ... - <capability_name_or_type_n> : properties : - <cap_1_property_filter_def_1> - ... 
- <cap_m_property_filter_def_n> Example my_node_template : # other details omitted for brevity requirements : - host : node_filter : capabilities : # My “host” Compute node needs these properties: - host : properties : - num_cpus : { in_range : [ 1 , 4 ] } - mem_size : { greater_or_equal : 512 MB } "},{"title":"Node template","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/node_template.html","date":null,"categories":[],"body":"A Node Template specifies the occurrence of a manageable software component as part of an application’s topology model which is defined in a TOSCA Service Template. A Node template is an instance of a specified Node Type and can provide customized properties, constraints or operations which override the defaults provided by its Node Type and its implementations. If a node template name contains some special character (i.e. not an alphanumeric character from the basic Latin alphabet or the underscore), we will replace this character with an underscore. Keynames The following is the list of keynames recognized for a TOSCA Node Template definition and parsed by Alien4Cloud: Keyname Required Description type yes The required name of the Node Type the Node Template is based upon. requirements no An optional sequenced list of requirement definitions for the Node Template. properties no An optional list of property values for the node template. capabilities no An optional map of capabilities for the node template. interfaces no An optional list of named interface definitions that override those coming from the type. Grammar The overall structure of a TOSCA Node Template and its top-level keynames using the TOSCA Simple Profile is shown below: <node_template_name> : type : <node_type_name> properties : <property_definitions> requirements : <requirement_definitions> capabilities : <capability_definitions> interfaces : <interface_definitions> type Represents the name of the Node Type the Node Template is based upon. 
This Node Type must be found in the archive itself or in the declared dependencies of the service template. requirements To define relationships between node templates, you can describe the requirements that point to a target’s capability. Named requirement definitions have one of the following grammars: short notation (node only) <requirement_name> : <template_name> When using this short notation: the <requirement_name> must match the name of the requirement in the type definition. The <template_name> points to another node template in the topology template (relationship target). The type of the node template target must have a capability named like the requirement. Here is an example: topology_template : node_templates : compute : type : tosca.nodes.Compute apache : type : alien.nodes.Apache requirements : # the alien.nodes.Apache type defines a requirement named 'host' # the tosca.nodes.Compute type defines a capability named 'host' - host : compute short notation (with relationship or capability) In some situations, the short notation is not enough, for example when the capability name doesn’t match the requirement name (in this case, you must specify the capability type), or when you want to define relationship property values. The following grammar would be used if either a relationship or capability is needed to describe the requirement: <requirement_name> : node : <template_name> capability : <capability_type> relationship : <relationship_type> In such notation the keywords are: Keyname Required Description node yes The relationship target node template name. capability yes The type of the target node type capability that should be used to build the relationship. relationship no Optionally, the name of the relationship type that should be used to build the relationship (if not defined in the requirement definition, it must be specified here). properties no An optional list of property values for the relationship (non TOSCA). 
interfaces no An optional list of named interface definitions that override those coming from relationship type. In the following example, the relationship type is found in the requirement ‘database’ of the type alien.nodes.Wordpress. The capability is found by the specified type ‘alien.capabilities.MysqlDatabase’ : node_templates : wordpress : type : alien.nodes.Wordpress requirements : - host : apache - database : node : mysql capability : alien.capabilities.MysqlDatabase - php : node : php capability : alien.capabilities.PHPModule In the following example, the relationship is specified: node_templates : compute : type : tosca.nodes.Compute requirements : - network : node : network capability : tosca.capabilities.Connectivity relationship : tosca.relationships.Network network : type : tosca.nodes.Network properties The property values can either be: a scalar value a function: a reference to an input In the following example, 2 properties are defined for the node ‘compute1’ (1 referencing an input, and the other defined using a scalar value): topology_template : inputs : os_type : type : string constraints : - valid_values : [ \"linux\" , \"aix\" , \"mac os\" , \"windows\" ] description : The host Operating System (OS) type. 
node_templates : compute1 : type : tosca.nodes.Compute properties : os_type : { get_input : os_type } mem_size : 1024 capabilities In the following example, we define the value of the property ‘port’ for the capability named ‘database_endpoint’ of the node ‘mysql_database’: topology_template : node_templates : mysql_database : type : tosca.nodes.Database capabilities : database_endpoint : properties : port : 3306 Note that the property value can also be a get_input function: topology_template : inputs : mysql_port : type : string node_templates : mysql_database : type : tosca.nodes.Database capabilities : database_endpoint : properties : port : { get_input : mysql_port } interfaces You are allowed to: override an interface defined in the type and override a given operation. override an interface defined in the type by adding a new operation. add a new interface to the node template. "},{"title":"Node type","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/node_type.html","date":null,"categories":[],"body":"A Node Type is a reusable entity that defines the type of one or more Node Templates. As such, a Node Type defines the structure of observable properties via a Properties Definition, the Requirements and Capabilities of the node as well as its supported interfaces. Keynames Keyname Required Type Description tosca_definitions_version derived_from no string An optional parent Node Type name the Node Type derives from. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 version (1) no version An optional version for the Entity Type definition. N.A. metadata (2) no map of string Defines a section used to declare additional metadata information. alien_dsl_1_3_0 tosca_simple_yaml_1_0 tags (2) no map of string Defines a section used to declare additional metadata information. alien_dsl_1_3_0 alien_dsl_1_2_0 description no string An optional description for the Node Type. 
alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 abstract (3) no boolean Optional flag to specify if a component is abstract and has no implementation. Defaults to false. alien_dsl_1_3_0 alien_dsl_1_2_0 attributes no map of attribute definitions An optional list of attribute definitions for the Node Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 properties no map of property definitions An optional list of property definitions for the Node Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 requirements no list of requirement definitions An optional sequenced list of requirement definitions for the Node Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 capabilities no map of capability definitions An optional list of capability definitions for the Node Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 interfaces no map of interface definitions An optional list of named interfaces for the Node Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 artifacts no map of artifact definitions An optional sequenced list of named artifact definitions for the Node Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 (1) version at type level is defined in TOSCA but it is optional and there is no example of how it should be managed. We believe in alien4cloud that versions should be managed at the service template/archive level and dispatched to every element defined in the service template/archive. (2) metadata appeared in TOSCA while alien4cloud already had tags supported; support for the metadata keyword has been added in version 1.3.1. Note that if you specify both metadata and tags one may silently override the other (this should be avoided). (3) The abstract flag is specific to Alien 4 Cloud and is not part of TOSCA Simple Profile in YAML. In TOSCA nodes are considered abstract as long as the create method of the node is not implemented. 
Requirement definitions name Requirement definitions on a node type are specified as a list because the ordering should define the order in which the relationships will be managed when building the workflow out of the declarative model. However TOSCA does not authorize duplicating the name of a requirement definition, which must be unique for a given node type. Grammar <node_type_name> : derived_from : <parent_node_type_name> description : <node_type_description> properties : <property_definitions> attributes : <attribute_definitions> requirements : - <requirement_definition_1> ... - <requirement_definition_n> capabilities : <capability_definitions> interfaces : <interface_definitions> artifacts : <artifact_definitions> See: property_definitions attribute definitions requirement definitions capability definitions artifact definitions interface definitions Example my_company.my_types.my_app_node_type : derived_from : tosca.nodes.SoftwareComponent description : My company’s custom application properties : my_app_password : type : string description : application password constraints : - min_length : 6 - max_length : 10 my_app_port : type : number description : application port number requirements : - host : tosca.nodes.Compute interfaces : [ Standard ] "},{"title":"Operation definition","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/operation_definition.html","date":null,"categories":[],"body":"An operation definition defines a named function or procedure that can be bound to an implementation artifact (e.g., a script). Keynames Keyname Required Type Description tosca_definitions_version description no string The optional description string for the associated named operation. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 implementation no string or implementation artifact definition (1) (2) The optional implementation artifact name (example: a script file name within a TOSCA CSAR file). 
alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 inputs (3) no list of property definitions or list of property assignment The optional list of input parameter definitions. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 (1) TOSCA does not support the implementation artifact definition syntax in the 1.0.0 and 1.1.0 versions. This is however not really correct, as using the artifact extension may not be enough to auto-detect the type of artifact and therefore the way to execute it. (2) TOSCA allows the implementation keyword to have two children, a primary artifact and dependencies artifacts. We don’t support this notation in alien4cloud and don’t yet support the ability to specify primary and dependencies artifacts. Read how you can get artifacts in alien4cloud . (3) defined inputs are injected at runtime as environment variables for the implementation script. In addition to these defined inputs, some properties related to the nodes are also automatically available. See the Environment variables section . The artifact must be related to an artifact type. The way artifacts are related to artifact types is based on the implementation artifact name extension. This refers directly to the artifact type’s file_ext property that may have been defined. If no artifact type matches the extension, Alien 4 Cloud will not allow parsing of the artifact. TOSCA supports the definition of primary and dependencies artifacts. This is not yet supported in alien4cloud. Note that the primary and dependencies syntax is not in line with the definition of implementation artifacts and is not really valid for the reasons explained here. 
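As a sketch of note (3) above, an implementation script can simply read its declared operation inputs as environment variables of the same name. The script path and the default value below are hypothetical, for illustration only; the default only lets the sketch run standalone, outside an orchestrated deployment:

```shell
#!/bin/sh
# Hypothetical implementation script (e.g. /scripts/do_something.sh).
# Each operation input declared in the TOSCA operation definition is
# assumed to be injected by the orchestrator as an environment variable
# of the same name, so the script reads it directly.
property_input="${property_input:-standalone-test}"
msg="do_something called with property_input=${property_input}"
echo "$msg"
```

When invoked by the orchestrator, property_input would carry the value provided on the operation call instead of the standalone default.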
Grammar <operation_name> : description : <operation_description> implementation : <implementation_artifact_name> inputs : <parameter_definition> In addition, the following simplified grammar may also be used (where a full definition is not necessary): <operation_name> : <implementation_artifact_name> implementation_artifact_name must be the path to a file and is resolved starting from the archive root. Example The following example shows how to define a node type with operations: node_types : fastconnect.nodes.OperationSample : interfaces : Standard : create : /scripts/install.sh configure : description : This is the configuration description. implementation : /scripts/setup.sh inputs : value_input : 4 function_input : { get_property : [ my_host , mem_size ] } custom : do_something : implementation : /scripts/do_something.sh inputs : property_input : type : string description : An input that will have to be provided on operation call. constraints : - min_length : 4 - max_length : 8 "},{"title":"Orchestrator(s) and location(s) management","baseurl":"","url":"/documentation/1.4.0/user_guide/orchestrator_location_management.html","date":null,"categories":[],"body":" To understand the orchestrator and location concepts, please refer to this section . Requirements Alien 4 Cloud is not responsible for actual deployment orchestration but rather interacts with existing orchestration technologies. In order to define an orchestrator and a location, you must configure plugins that will be used to actually perform deployment(s) on the defined location using the created orchestrator. In order to configure a set of Orchestrator/locations, you must first have installed an orchestrator plugin (see plugin management ). Supported orchestrators We currently support the open-source orchestrator Cloudify 3.4.0. Orchestrators management Orchestrator creation Once you have installed a plugin, the admin can go to the orchestrator page and configure one. 
Remember that you can use the Alien 4 Cloud contextual help in order to be guided directly within the application. To create an orchestrator, just go to the orchestrator list page and click on the New orchestrator button. Orchestrator configuration To configure an orchestrator, select it in the orchestrator list page and go to the configuration side menu. Naming policy On every orchestrator, you can configure a naming policy that Alien 4 Cloud will use when deploying an application. The naming policy will be used to identify the deployment on the cloud’s orchestrator. Most of the orchestrators will also leverage this naming policy to name the resources used at the IaaS level. To compose your own application naming policy, you can use the following entities and properties : environment : the environment linked to the deployment id name description environmentType : OTHER, DEVELOPMENT, INTEGRATION_TESTS, USER_ACCEPTANCE_TESTS, PRE_PRODUCTION, PRODUCTION application : deployed application id name creationDate lastUpdateDate metaProperties [‘PROPERTY_NAME’] : meta-properties defined on the application time : current date in the format yyyyMMddHHmm The default naming policy setting for any orchestrator is : environment.name + application.name Deployment name unicity The deployment name must be unique at a given time; the orchestrator administrator is responsible for choosing a pattern that should be unique, or some application(s) may not be deployed (if a deployment with the same name is already running). Note that we guarantee that an application’s name is unique across all applications and that an environment name is unique for a given application. However, when generating the application paaSId (final application name on the PaaS), all space characters will be replaced by an _ . 
Therefore and as an example, if your naming policy involves the application name, you cannot simultaneously deploy two applications named “ Test App ” and “ Test_App ” with the same orchestrator, as the generated paaSId will be in conflict. The main pattern to define a naming policy is to use + to concatenate different properties or text, for example : environment.name + application.name + time application.id + environment.environmentType + '-US_ZONE' time + '__' + application.creationDate 'MY_APP' + '-WORDPRESS-' + time metaProperties['PROPERTY_NAME'] + '-' + time Empty meta properties Any empty property used in the naming policy expression will cause a deployment failure. Advanced use : the policy expression is based on SpEL ( Spring Expression Language ) and you could use its capabilities if you are familiar with it. Note : do not use the # Driver configuration Most orchestrator plugins will require specific configuration in order to communicate with the actual orchestrator instance or to configure its behavior. As this configuration is specific to the orchestrator, you should refer to the orchestrator-specific guide. For example, the cloudify3 provider defines connection parameters so that Alien 4 Cloud can communicate with the orchestrator engine server (cloudify 3 manager). More information about cloudify 3 can be found in its specific documentation. Updating configuration Usually, the configuration of an orchestrator is done before enabling it. The configuration is automatically saved in that case. However, it might happen that you want to update it after the orchestrator has been enabled. In that case, you need to unlock the configuration first, by hitting the unlock button in the top right corner of the configuration screen. Do not forget to hit the save button at the bottom of the screen to save the configuration; it will be loaded immediately. 
Enabling the orchestrator Once properly configured, you should enable the orchestrator, by hitting the enable button on the information screen of the selected orchestrator. Locations management After configuring the orchestrator, you have to create one or more locations, depending on whether your orchestrator allows it. Note that you cannot access the location management steps on a disabled orchestrator. Location creation To create a location, first go to the locations page of the orchestrator, by clicking on the side menu represented by a cloud, then click on the New location button and fill in the form. Location configuration Once created, you must configure the location. It requires several steps: Configure cloud resources used for resource matching at deployment time. Configure the meta properties of the orchestrator (these depend on the chosen one). Configure the security access to the location Configure location resources Configuration resources tab In this step you need to configure the resource types exposed by the location. These resources will help configure or generate those which will be used for matching during deployment configuration. For example, the Cloudify 3 provider exposes configuration resources such as Images, Flavors, and Availability zones. On demand resources tab On demand resources are the exact resources used for matching nodes within the topology before deploying, such as Computes, networks and volumes. They may be a combination of one or more configuration resources. (For example, the on demand resource Compute could be a combination of Image and Flavor configuration resources.) The following is an example based on the Cloudify 3 provider: To add more resource templates, you can simply drag and drop resource types from the panel on the right. Auto configuration If the location exposes a way to automatically generate on-demand resources, you can hit the “auto-config” button to auto-generate them. 
Adding custom on-demand resources from the TOSCA catalog You can also create custom on demand resource templates using any type from the catalog - provided that it is not abstract. This allows you to match abstract nodes in a topology to concrete custom resources, defined within your orchestrator location, exactly the same way you are used to with on-demand resources provided by the orchestrator. To do so, simply drag and drop resource types from the catalog panel. At this point, we assume that you know what you are doing when creating custom resources. If not, please make sure you go through our documentation on this feature . Alien has no way to verify if custom resource templates created in the location are compatible with your orchestrator. Meta properties This feature allows you to define meta-properties on your location and then use them in your topology as an internal variable defined by your administrator. Obviously, as a CLOUD_DEPLOYER , APPLICATION_USER or APPLICATION_MANAGER , you won’t be able to change this value. At this stage, we assume you know how to create meta-properties targeting a location, application or environment. In the meta-properties tab, you should be able to set a value for any location-targeted meta-property. Fill in the desired values in order to use them later, as in get_input for a property. Regarding your meta-property definition, you can add constraints on it. Thus, you will see a constraint violation error, if any, in this location meta-properties form. Security To manage the authorizations on a location, refer to this page . "},{"title":"Orchestrators","baseurl":"","url":"/documentation/1.4.0/orchestrators/orchestrators.html","date":null,"categories":[],"body":"This section details configuration of supported orchestrators in alien 4 cloud. We currently officially support Cloudify 3 , as well as Cloudify 4 , and are working with Cloudsoft on the support of Apache Brooklyn. 
In 1.4.0, we added experimental support for Marathon , a meta-framework for Mesos which enables scheduling and orchestration of Docker containers . If you are looking for information on how to contribute or integrate other orchestrators or deployment technologies, please check out our contribute section. "},{"title":"Orchestrators and locations","baseurl":"","url":"/documentation/1.4.0/concepts/orchestrators_locations.html","date":null,"categories":[],"body":"Orchestrators and locations are the fundamental concepts that allow alien 4 cloud to bring deployment portability across various technologies (orchestrators) and targets (locations). Orchestrators An orchestrator is a deployment technology that will orchestrate (and eventually maintain) the deployment and undeployment of a topology. There are various orchestrators on the market and Alien 4 Cloud can easily integrate with them through a plugin mechanism. Every deployment in alien 4 cloud is done through an orchestrator on a Location configured for and managed by that orchestrator . Locations In Alien 4 Cloud every deployment is done on what we call a Location . Locations are used to describe a logical deployment target that offers a set of resources (e.g. machines, storage, network, firewalls etc.). A location may refer to a cloud , to a specific tenant on a cloud , to a set of physical machines (BYON), or even docker containers. For example, a location can refer to an Amazon configuration using a specific account while another location could be configured to work on a specific Tenant on an OpenStack cloud. To make it simple, a location is a logical set of available resources that Alien 4 Cloud can use to deploy applications. In order to do so, A4C relies on orchestration technologies that are easily pluggable. You can create and configure as many locations as you like to have as many deployment targets (environments) as required. 
Location’s supported resources As we stated, a location may refer to very different targets, from clouds to containers or physical machines. This means that you may have differences in the resources supported by these locations. Moreover, the configuration of these resources may not be the same from one location to another. Alien 4 Cloud has been designed to allow portability of a given application or, to be precise, of a given topology to deploy. However, we never wanted to limit the user because of portability. The location’s defined resources are there exactly for this purpose. Every location exposes its own specific definitions of the generic components (or nodes) that lie in TOSCA. They can also expose some components that are not at all related to the normative nodes, so people can choose to create non-portable topologies if they feel that they will get more benefits from it than from portability. We really wanted in 1.1.0 to provide users with flexibility and the ability to benefit from the best of orchestrator technologies. Here is a list of supported resources with Cloudify 3: Infrastructure type OS type Supported artifact OpenStack linux .sh ( tosca.artifacts.ShellScript ) windows .bat ( alien.artifacts.BatchScript ) AWS linux .sh ( tosca.artifacts.ShellScript ) windows .bat ( alien.artifacts.BatchScript ) Some Alien users have also deployed Puppet artifacts through Groovy scripts. Note We currently support the open-source orchestrator cloudify 3 but are also working in collaboration with Apache Brooklyn on an orchestration plugin currently in incubation state. Role and security Only a user with the global ADMIN role can define, configure, enable and grant deployment roles to other users or groups on a location. To find out more about locations and how to configure them in Alien 4 Cloud, please look at the Getting started guide if you don’t already have an Alien instance running, and at the cloud setup guide in order to learn cloud configuration. 
"},{"title":"Rest API","baseurl":"","url":"/documentation/1.4.0/rest/overview.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"admin-api","baseurl":"","url":"/documentation/1.4.0/rest/overview_admin-api.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"admin-audit-api","baseurl":"","url":"/documentation/1.4.0/rest/overview_admin-audit-api.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"admin-metaproperties-api","baseurl":"","url":"/documentation/1.4.0/rest/overview_admin-metaproperties-api.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"admin-orchestrator-api","baseurl":"","url":"/documentation/1.4.0/rest/overview_admin-orchestrator-api.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"admin-plugin-api","baseurl":"","url":"/documentation/1.4.0/rest/overview_admin-plugin-api.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"admin-user-api","baseurl":"","url":"/documentation/1.4.0/rest/overview_admin-user-api.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. 
Version information Version: 1 "},{"title":"applications-api","baseurl":"","url":"/documentation/1.4.0/rest/overview_applications-api.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"applications-deployment-api","baseurl":"","url":"/documentation/1.4.0/rest/overview_applications-deployment-api.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"catalog-api","baseurl":"","url":"/documentation/1.4.0/rest/overview_catalog-api.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"other-apis","baseurl":"","url":"/documentation/1.4.0/rest/overview_other-apis.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"topology-editor-api","baseurl":"","url":"/documentation/1.4.0/rest/overview_topology-editor-api.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"workspaces-api","baseurl":"","url":"/documentation/1.4.0/rest/overview_workspaces-api.html","date":null,"categories":[],"body":"ALIEN 4 Cloud API Overview This section contains documentation of Alien 4 Cloud REST API. Version information Version: 1 "},{"title":"Property assignment","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/parameter_definition.html","date":null,"categories":[],"body":"A parameter definition is a map used to declare a name for a parameter along with its value to be used as inputs for operations. 
This value can either be a fixed value or one that is evaluated from a function or expression. Alien 4 Cloud also allows users to specify a property definition as the parameter value. This is possible only for operations that are not part of the automatic lifecycle but that will be triggered upon user request. Grammar <parameter_name> : <value> | <function_definition> See function_definition . For Alien 4 Cloud property definition syntax support you can refer to the property_definition page . Example node_types : fastconnect.nodes.PropertiesSample : interfaces : custom : do_something : inputs : value_input : 4 function_input : { get_property : [ my_host , mem_size ] } property_input : type : string description : An input that will have to be provided on operation call. constraints : - min_length : 4 - max_length : 8 "},{"title":"Plugin(s) management","baseurl":"","url":"/documentation/1.4.0/user_guide/plugin_management.html","date":null,"categories":[],"body":" Plugins provide additional functionality to Alien 4 Cloud. Users can create different types of plugins (or plugins with multiple features): Orchestrator plugins add support for additional orchestrators, allowing alien 4 cloud to use other technologies to deploy TOSCA or specific topologies. Location matching plugins allow overriding the basic location matching logic provided within Alien 4 Cloud. Only a single Location Matching plugin can be defined in Alien 4 Cloud currently. If more than one location matching plugin is enabled in Alien, then one will be picked up randomly. Node matching plugins allow overriding the basic TOSCA node matching logic within Alien 4 Cloud. Generic extension plugins can provide additional UI screens and REST Services allowing any kind of extension to alien 4 cloud and even to override some of the alien UI components (it is not possible to override native rest services). 
If you want to create new plugins for Alien 4 Cloud please refer to the developer guide . Installing a plugin in Alien 4 Cloud Drag your archive file > Drop it on the dash-dotted area Click on [Upload plugin] > Select your archive (The file is automatically uploaded) After installing, removing, disabling or enabling a plugin that provides UI components, users must refresh their browser page in order to reload the plugin’s javascript code that may have changed. This is especially true when removing or disabling a plugin, as the rest services used by the plugin’s UI won’t be available anymore, eventually causing unexpected 500 errors. Plugin configuration Some plugins may require specific configuration that is global to the plugin. In case a plugin can be configured, you will see the following icon : Advanced plugin configurations The configuration detailed in the previous section is global for the plugin. Some plugins may require some specific configurations that you can find in other places in the application. You should refer to the plugin-specific documentation to know more about it. For example, PaaS provider plugins are actually able to manage multiple instances of orchestrators; the specific configuration for each instance is managed at the cloud level. Plugin update Due to the historical management of orchestrator plugins, alien4cloud allowed, before 1.3.1, multiple versions of the same plugin to be concurrently loaded and enabled. Starting from version 1.3.1, this behavior is not allowed anymore and a single version of a given plugin can exist at a given point in time. This avoids a lot of potential conflicts, especially on the UI side. In previous versions of alien4cloud a migration tool was provided to ensure plugin version updates; starting from 1.3.1, plugins will be automatically updated when the alien4cloud server restarts, based on the plugins existing in the initialization folder. 
Update process: - Stop your alien4cloud server - Remove old plugin(s) archive(s) from the /init/plugins folder and add new one(s) - Restart alien4cloud On startup, Alien 4 Cloud will automatically update the plugin versions inside alien4cloud and load the new plugin version. Model update This auto-update does not perform any model update. If your plugin model has changed, you should either provide a migration tool to update data or have a built-in migration process upon plugin initialization. Hot update We don’t support hot plugin updates currently. This is a choice we made, as unloading a plugin may cause an interruption of some active processing from the plugin (including ongoing deployment/un-deployment). This behavior will however be improved in future versions, and plugins will be responsible for their shutdown management before being disabled. "},{"title":"Policy","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/policy.html","date":null,"categories":[],"body":"A policy applies to a group of nodes in a topology template. Only one policy type is currently supported in A4C : tosca.policy.ha This policy is not described by TOSCA (policies are actually a WIP). We have defined this one to support high availability features. Keynames The following is the list of keynames recognized for a TOSCA Policy definition and parsed by Alien4Cloud: Keyname Required Description name yes The required name of the policy. type yes The type of the policy. Several notations are available to express a policy. 
Standard Grammar name : <policy_name> type : <policy_type_name> Example node_templates : server1 : type : tosca.nodes.Compute server2 : type : tosca.nodes.Compute groups : server_group_1 : members : [ server1 , server2 ] policies : - name : my_scaling_ha_policy type : tosca.policy.ha Shortcut Grammar <policy_name> : <policy_type_name> Example node_templates : server1 : type : tosca.nodes.Compute server2 : type : tosca.nodes.Compute groups : server_group_1 : members : [ server1 , server2 ] policies : - my_scaling_ha_policy : tosca.policy.ha TOSCA Samples Grammar This grammar has been used in the TOSCA simple profile examples. We support it for compatibility but don’t recommend it. <policy_name> : type : <policy_type_name> Example node_templates : server1 : type : tosca.nodes.Compute server2 : type : tosca.nodes.Compute groups : server_group_1 : members : [ server1 , server2 ] policies : - my_scaling_ha_policy : type : tosca.policy.ha "},{"title":"Ports requirements","baseurl":"","url":"/documentation/1.4.0/admin_guide/ports_requirements.html","date":null,"categories":[],"body":"This section describes all the necessary ports for Alien4Cloud to work. Network traffic must be unrestricted on all of them for the involved servers. Note : Cloudify ports are listed here only as an indication. If you have any doubt about the required Cloudify ports, or are using an unmentioned version of Cloudify, please check the cloudify documentation . 
Component - Port description Default port number/range Component Version Alien4Cloud - standalone GUI port 8088 1.0.0, 1.1.0, 1.4.0 Alien Post-Deployment application 8089 1.4.0 Cloudify - Management server ports 8099,8100,22,443,80 3.2, 3.3 Cloudify - Management server ports for Agent/manager communication 5672,8101,53229 3.2, 3.3 "},{"title":"Alien Post-Deployment application","baseurl":"","url":"/documentation/1.4.0/admin_guide/post_deployment_application.html","date":null,"categories":[],"body":" Premium feature This section refers to a premium feature. The Alien post-deployment web application is a Spring Boot application that helps manage patches or operations added to a node within a deployment. You MUST deploy it if you plan on providing users with the ability to perform post-deployment operations on an application. Where to deploy You can deploy the post deployment application wherever suits you, but note that it should be easily accessible from Alien4cloud. For example, for the users of the cloudify 3 orchestrator plugin, it is possible to deploy it on your manager instance. Just make sure to open the configured ports. Download The application is owned by Alien4Cloud premium dist. Here is the link to download it: Alien 4 Cloud Post Deployment Application Installation The application already contains a basic configuration that is good enough for a test environment. However, in order to move into production, you should customize the configuration. If you are working with the packaged zip, then you do not need to create the following files (and the start script), as they are already in the package. However, if working directly with the war package, then this might be useful. 
Alongside the application war, you should place configuration files in a folder named config : ├── alien4cloud-postdeployment-rest- { version } .war ├── config/alien4cloud-post-deployment-config.yml ├── config/elasticsearch.yml Here you can find a sample configuration for: alien4cloud-post-deployment-config.yml elasticsearch.yml Feel free to customize the values of the different configurations. However, the main elements you might wish to modify are: the port and the alien_post_deployment section for alien4cloud-post-deployment-config.yml , and the elasticsearch storage directories for elasticsearch.yml path : data : ${user.home}/.alienpostdeployment/elasticsearch/data work : ${user.home}/.alienpostdeployment/elasticsearch/work logs : ${user.home}/.alienpostdeployment/elasticsearch/logs start script You can also add a simple start script: ├── start.sh ├── alien4cloud-postdeployment-rest- { version } .war ├── config/alien4cloud-post-deployment-config.yml ├── config/elasticsearch.yml with the following content: cd ` dirname $0 ` JAVA_OPTIONS = \"-server -showversion -XX:+AggressiveOpts -Xmx1g -Xms1g -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError\" java $JAVA_OPTIONS \\ -cp config/:alien4cloud-postdeployment-rest- { version } .war \\ org.springframework.boot.loader.WarLauncher \"$@\" Deploying Just run the script alien4cloud-post-deployment.sh (or the previously created start.sh if not working with the zip package). Go to the url http://<deployed_machine_ip>:<server_port>/rest/postdeployment/test , and you should have the response: Running Advanced configuration Using ssl See security section . ElasticSearch See Elastic Search configuration section. "},{"title":"Post-Deployment configuration","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/postdeployment_config.html","date":null,"categories":[],"body":" Premium feature This section refers to a premium feature. 
If you are working with the premium cloudify 3 plugin, then you have a feature that allows you to add and execute patches and custom operations on a deployed application. It requires a patches server deployed somewhere , and some configuration. Post-Deployment server URL You must provide the endpoint URL of your deployed patches server: postDeploymentRestURL . SSL configurations If your server is deployed with SSL security, and requires clients to authenticate themselves to it, you need to provide alien4cloud with some security information when configuring the orchestrator: CA certificate : The authority certificate used to trust the patches server; Client key : The key used to authenticate the client to the patches server; Client certificate : The certificate used to authenticate the client to the patches server. You must provide them in the form of string data values (meaning you have to open your .pem file with a text editor and copy the content). Make sure that you are in multi-line edition mode, by clicking on the icon next to the edit box. "},{"title":"Post-Deployment configuration","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/postdeployment_config.html","date":null,"categories":[],"body":" Premium feature This section refers to a premium feature. If you are working with the premium cloudify 4 plugin, then you have a feature that allows you to add and execute patches and custom operations on a deployed application. It requires a patches server deployed somewhere , and some configuration. Post-Deployment server URL You must provide the endpoint URL of your deployed patches server: postDeploymentRestURL . 
SSL configurations If your server is deployed with SSL security, and requires clients to authenticate themselves to it, you need to provide alien4cloud with some security information when configuring the orchestrator: CA certificate : The authority certificate used to trust the patches server; Client key : The key used to authenticate the client to the patches server; Client certificate : The certificate used to authenticate the client to the patches server. You must provide them in the form of string data values (meaning you have to open your .pem file with a text editor and copy the content). Make sure that you are in multi-line edition mode, by clicking on the icon next to the edit box. "},{"title":"Prerequisites","baseurl":"","url":"/documentation/1.4.0/orchestrators/marathon_driver/prerequisites.html","date":null,"categories":[],"body":"You can start orchestrating containers with the Marathon plugin in just a few simple steps. To operate the plugin, you will need a Mesos cluster running Marathon. You can use an already existing cluster or let Alien4Cloud do all the heavy-lifting and set up one for you. Note that the plugin has been tested only on clusters running on EC2 or Openstack but should work with other IaaS as well as on a bare metal infrastructure. Setting up a Marathon + Mesos cluster using Alien We modeled Mesos, Marathon, the Docker engine and other useful components into TOSCA node types. You can create your own custom Mesos TOSCA composition or use one of the provided templates in the mesos-tosca-types repository . To deploy the cluster, we currently leverage the Cloudify orchestrator. Supported distributions are : Ubuntu 14.04 Debian 7 (wheezy) RedHat 6.2, 7.1 CentOS 6.2, 7.1 Marathon requires Java8+. For Ubuntu 14.04, we use unofficial third-party repositories from webupd8team to install Oracle’s JDK - use it at your own risk. 
To deploy a Marathon cluster with Alien, assuming you already have a Cloudify orchestrator configured, follow these steps. Import TOSCA definitions for Mesos / Marathon Import the following CSARs into Alien using the GIT importer : The docker-engine archive, from the samples repository (recommended version: master) The mesos-types archive, from the mesos-tosca-types repository (recommended version: 1.2.0) The docker-types archive, from the docker-tosca-types repository (recommended version: 1.1.0). Those are not necessary to set up the cluster but are required by the plugin, so you might as well install them now too. Create an Alien application for your cluster Create your own custom Mesos TOSCA composition or use one of the templates present in the mesos-tosca-types repository . Note that if your IaaS doesn’t automatically assign public IPs you’ll have to add a public network to your topology. We recommend using our latest template, MarathonRexray , which features service-discovery and external storage. Configure the cluster using the MarathonRexray template The cluster designed with our template is ready to use. It provides service-discovery with MesosDNS & MarathonLB as well as external storage with REX-Ray. Scaling and high availability You can configure the compute nodes’ scalable capabilities to fit your needs. Increasing the default_instances property of the MasterCompute node will automatically enable high availability. Increasing the default_instances property of the SlaveCompute nodes will add more slaves to your cluster. If the SlaveCompute also hosts a MesosDNS component, it will enable high availability for the cluster’s DNS too. Once the cluster is deployed, you can still adjust the number of slaves dynamically by using the scale button in the Runtime view. Configuring REX-Ray REX-Ray’s configuration is required as a Topology Input: Alien will request it from you upon deployment. Both libStorage and REX-Ray use a YAML file for configuration. 
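As a sketch of the scaling configuration described above (the SlaveCompute node name comes from the template, while the instance counts are hypothetical values), the scalable capability in the topology YAML could look like this:

```yaml
node_templates:
  SlaveCompute:
    type: tosca.nodes.Compute
    capabilities:
      scalable:
        properties:
          min_instances: 1        # lower bound for runtime scaling
          default_instances: 3    # initial number of slaves in the cluster
          max_instances: 10       # upper bound for runtime scaling
```

Raising default_instances before deployment adds slaves from the start; the min/max bounds constrain later scaling from the Runtime view.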
See their documentation for more info. To simplify things, we scripted REX-Ray clients’ configuration and provided a template for libStorage’s configuration - the RexrayServer node in our template. Remember that in order to operate, libStorage needs your IAAS credentials. libStorage only supports a few storage providers at the moment. You can review available providers here . In the Deployments View, go to the Inputs tab to configure the REX-Ray cluster: fill the storage-provider input property with the storage provider of your choosing. For example, if you’re going with Amazon’s Elastic Block Storage service, use ebs . upload libStorage’s configuration file using the rexray_server_config input artifact. For EBS, you can use the sample below - just update it with your AWS credentials at the end. For other providers, follow libStorage’s documentation . rexray : modules : default-docker : disabled : true logLevel : warn libstorage : host : tcp://127.0.0.1:7979 embedded : true client : type : controller integration : volume : operations : # See Marathon & Rexray documentation mount : preempt : true unmount : ignoreusedcount : true server : endpoints : public : address : tcp://:7979 services : ebs : driver : ebs # Refers to storage providers defined below # Use this to activate TLS encryption. Equivalent configuration must be set on clients too. # tls: # certFile: /etc/libstorage/libstorage-server.crt # keyFile: /etc/libstorage/libstorage-server.key # trustedCertsFile: /etc/libstorage/trusted-certs.crt # clientCertRequired: true # Define storage providers like this - Example for AWS EBS ebs : accessKey : <your-access-key> secretKey : <your-secret-key> region : <your-region> Hit deploy, sit back and relax. You can also follow our demonstration video . 
"},{"title":"Prerequisites","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/prerequisites.html","date":null,"categories":[],"body":"Here are some prerequisite steps that need to be done in order to use the cloudify 3 driver. Install Cloudify 3.4.0 How to install cloudify 3.4.0 is described here. Bootstrap your manager How to bootstrap cloudify 3.4.0 is described here. Note that Cloudify 3.4.0 only supports bootstrapping on CentOS 7.x or RHEL 7.x. Secure your manager How to secure your manager is described here. The following configuration is only suitable for testing purposes; in production you should customize it with your own configuration. To perform a rapid test of the feature, you can just enable the following keys in the bootstrap inputs admin_username : admin admin_password : admin ssl_enabled : true security_enabled : true In the shell that you’ll use to bootstrap cloudify you should export the following variables export CLOUDIFY_SSL_TRUST_ALL = True export CLOUDIFY_USERNAME = admin export CLOUDIFY_PASSWORD = admin Enable scaling Some additional steps are necessary to use scaling. Please refer to the “Cloudify 3” part to see how you can enable scaling. "},{"title":"Prerequisites","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/prerequisites.html","date":null,"categories":[],"body":"Here are some prerequisite steps that need to be done in order to use the cloudify 4 driver. Install Cloudify 4.1.1 How to install cloudify 4.1.1 is described here . Bootstrap your manager How to bootstrap cloudify 4.1.1 is described here . Note that cloudify 4.1.1 only supports bootstrapping on CentOS 7.x or RHEL 7.x. 
Here is a simple snippet to bootstrap the manager: curl -LO http://repository.cloudifysource.org/cloudify/4.1.1/ga-release/cloudify-enterprise-cli-4.1.1ga.rpm sudo rpm -i http://repository.cloudifysource.org/cloudify/4.1.1/ga-release/cloudify-enterprise-cli-4.1.1ga.rpm # Edit /opt/cfy/cloudify-manager-blueprints/simple-manager-blueprint-inputs.yaml # Then bootstrap the manager cd /opt/cfy/cloudify-manager-blueprints sudo cfy bootstrap simple-manager-blueprint.yaml -i simple-manager-blueprint-inputs.yaml Secure your manager Starting from 4.0, Cloudify’s manager is secured by default using a username/password, with SSL enabled by default on the nginx for the REST API and the WebUI. At bootstrap, it will generate a private certificate for internal communications between the manager’s components and its agents. The following configuration is only suitable for testing purposes; in production you should customize it with your own configuration. To perform a rapid test of the feature, you can just enable the following keys in the bootstrap inputs Setting the credentials By default, the username is ‘admin’ with a password generated at bootstrap. If you want to set your own password, uncomment and edit the following keys in the inputs file. admin_username : admin admin_password : admin Setting the Nginx SSL If not provided, the bootstrap workflow will generate a default key and certificate for the external communication with NGinx (REST API & WebUI). You can generate and provide your own certificate by copying it into /opt/cfy/cloudify-manager-blueprints/resources/ssl Make sure you name them cloudify_external_cert.pem and cloudify_external_key.pem . 
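As a hedged example of providing your own certificate (the subject CN below is a placeholder, not taken from the documentation; adapt it to your manager's hostname), a self-signed key and certificate with the expected file names could be generated with openssl before copying them into /opt/cfy/cloudify-manager-blueprints/resources/ssl :

```shell
# Generate a self-signed key/certificate pair named as Cloudify expects.
# The CN value is a placeholder; replace it with your manager hostname.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout cloudify_external_key.pem \
  -out cloudify_external_cert.pem \
  -subj '/CN=cloudify-manager.example.com'
# Then copy both .pem files into /opt/cfy/cloudify-manager-blueprints/resources/ssl
```

For production, a certificate signed by your own CA should be used instead of a self-signed one.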
Additional configuration steps Cloudify 4.x patches Deployment logs configuration Offline configuration "},{"title":"Deployment logs configuration","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/prerequisites_logs.html","date":null,"categories":[],"body":"Alien4Cloud 1.4.3 and above: Deployment logs configuration Prior to alien4cloud 1.4.3 , the cloudify 4 orchestrator used to pull logs directly from the cloudify events API. However, users noticed that, with a large number of deployments and heavy usage, some logs were lost and never reached alien4cloud. Starting from 1.4.3, to solve the issue, we changed the way we get logs from cloudify by adding a separate web application, alien4cloud-cfy-logs , which acts as a logs server for alien4cloud and is deployed on the Cloudify manager machine. How does it work? alien4cloud-cfy-logs is a web application exposing several endpoints to manage deployment logs. When an orchestrator (cloudify4) is created in alien4cloud, it registers itself to the logs server, and then starts actively polling. On the other hand, Cloudify, or more precisely its Logstash component, pushes events to the logs server using a specific API endpoint. The server saves logs on the filesystem for every saved orchestrator registration. An orchestrator polls for logs, receives one, and then sends an ACK to the logs server. The server receives the ACK, and then deletes the acknowledged log file for that orchestrator. HA concern Everything should work on its own in HA mode. Just make sure to install and configure the logs server on every instance of the Cloudify cluster. Install and configure As stated above, the logs server application is to be installed on the Cloudify manager machine. You can download it: alien4cloud-cfy-logs . For now, the Cloudify manager and the logs server should be installed using the same security mode. 
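The poll/ACK cycle described above can be sketched with a small in-memory model (the class and method names here are illustrative, not the real REST API):

```python
from collections import deque

class LogsServer:
    # Minimal sketch of the logs server behavior:
    # one pending-log queue is kept per registered orchestrator.
    def __init__(self):
        self.queues = {}

    def register(self, orchestrator_id):
        # Called when an orchestrator is created in alien4cloud.
        self.queues.setdefault(orchestrator_id, deque())

    def push(self, log):
        # Logstash pushes each event; it is stored for every registration.
        for queue in self.queues.values():
            queue.append(log)

    def poll(self, orchestrator_id):
        # The orchestrator polls; the entry stays until it is ACKed.
        queue = self.queues[orchestrator_id]
        return queue[0] if queue else None

    def ack(self, orchestrator_id):
        # On ACK, the acknowledged log is deleted for that orchestrator.
        self.queues[orchestrator_id].popleft()

server = LogsServer()
server.register('orch-1')
server.push('deployment started')
entry = server.poll('orch-1')
server.ack('orch-1')
```

A second orchestrator registered on the same server would keep its own copy of each pushed event until it sends its own ACK, which matches the per-registration storage described above.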
This means that if you bootstrapped an SSL secured manager (HTTPS), you MUST also install and configure the log application in SSL secured mode. Unzip the file in your preferred location. We will call it $a4c_log_dir The server configuration file is located at $a4c_log_dir/config/alien4cloud-cfy-logs-config.yml . The main property you would want to configure is server.port SSL configuration The logs server can be configured with SSL security, similar config here . However, the Logstash version used in Cloudify cannot push logs over the HTTPS protocol. Therefore, the log-pushing endpoint is the only one running over the HTTP protocol when the log server is secured with SSL. Remember to configure the property server.http.port for the HTTP protocol. server : ## HTTPS port port : 8443 ssl : enabled : true key-store : ssl/server-keystore.jks key-store-password : ***** key-password : ***** ## HTTP port. Only if SSL is enabled http : port : 8200 3- Update the configuration of Logstash: * Edit the file /etc/logstash/conf.d/logstash.conf * Find the output section, and add the following (replace HTTP_PORT with the value of: server.http.port if SSL enabled, server.port otherwise): output { http { http_method => 'post' url => 'http://localhost:<HTTP_PORT>/api/v1/logs' } [ ... ] } 4- Start the log server : cd $a4c_log_dir ./alien4cloud-cfy-logs.sh 5- Restart the logstash service: sudo systemctl restart logstash.service Your logs server is now running and fully able to communicate with both Cloudify and Alien4Cloud. "},{"title":"Offline configuration","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/prerequisites_offline.html","date":null,"categories":[],"body":"Alien4Cloud wraps the create volume operation of some of Cloudify’s IaaS plugins to be able to handle existing-volume use cases. For instance, default plugins cannot deploy multiple compute instances each with an attached volume. 
The Alien4Cloud provider takes advantage of the plugin mechanism of Cloudify to ship our customized wrapper embedded in the blueprint that we send to cloudify. This embedded plugin has some python dependencies; therefore, if your Cloudify Manager is bootstrapped on an isolated network (without internet) you need to perform a few configuration steps to be able to deploy volumes. Configure imports in the orchestrator Go to the configuration tab of your orchestrator Then click the target location and go to the imports field. For instance: locations > openstack > imports Update the value of plugins/overrides/plugin-included.yaml to plugins/overrides/plugin-managed.yaml . Upload the plugin on the manager Download the wagon corresponding to your target IaaS: Amazon Azure Openstack Then upload it on your manager: cfy plugins upload a4c_overrides_openstack-1.3.1-py27-none-linux_x86_64.wgn You can also upload it from Cloudify’s Webui. "},{"title":"Cloudify 4.x patches","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/prerequisites_patches.html","date":null,"categories":[],"body":"Patch the manager Some behaviors have changed between Cloudify 3.4 and Cloudify 4.x, so to keep it compatible with Alien4Cloud you need to apply the following patches. Safe Clean Patch Cloudify now cleans its artifacts in the /tmp folder after each execution, but since Alien4Cloud already does this we have a conflict. 
To work around any issues, make sure to apply the following patch: sudo cp /opt/mgmtworker/env/lib/python2.7/site-packages/script_runner/tasks.py /opt/mgmtworker/env/lib/python2.7/site-packages/script_runner/tasks.py.default sudo curl -L https://raw.githubusercontent.com/alien4cloud/samples/master/org/alien4cloud/automation/cloudify/patches/patch_tasks/playbook/roles/create/files/tasks.py -o /opt/mgmtworker/env/lib/python2.7/site-packages/script_runner/tasks.py sudo rm -f /opt/mgmtworker/env/lib/python2.7/site-packages/script_runner/tasks.pyc IaaS Credentials Cloudify has removed the IaaS information from the manager’s context. In theory, it now needs to be fed into each blueprint you want to deploy. The Cloudify provider for Alien4Cloud does not yet support this new behavior, so for now you will need to configure your manager to set IaaS information into the context. We provide a python script to help you configure your manager. curl -LO https://raw.githubusercontent.com/alien4cloud/samples/1.4.0/org/alien4cloud/automation/cloudify/manager/v4/scripts/iaas/cfy_config_iaas.py # sudo /opt/cfy/embedded/bin/python cfy_config_iaas.py -u USERNAME -p PASSWORD --ssl config -c ./iaas_config.yaml -i {aws,openstack,azure} # So for instance if your manager is installed on AWS: sudo /opt/cfy/embedded/bin/python cfy_config_iaas.py -u admin -p admin --ssl config -c ./iaas_config.yaml -i aws A configuration sample iaas_config.yaml for AWS : aws_access_key : 'AWS_ACCESS_KEY' aws_secret_key : 'AWS_SECRET_KEY' aws_region : 'AWS_REGION' agent_keypair_name : 'KEY_PAIR_NAME' agent_security_group_id : 'DEFAULT_AGENT_SECGROUP_ID' # The default sg that will be assigned to each compute provisioned by the manager agent_sh_user : 'DEFAULT_AGENT_SSH_USER' agent_private_key_path : 'PATH_TO_AGENT_KEYFILE' A configuration sample iaas_config.yaml for OpenStack : auth_url : 'OS_KEYSTONE_URL' username : 'OS_USERNAME' password : 'OS_PASSWORD' region : 'OS_REGION' tenant_name : 
'OS_TENANT_NAME' agent_sh_user : 'DEFAULT_AGENT_SSH_USER' agent_private_key_path : 'PATH_TO_AGENT_KEYFILE' resources : agents_keypair : name : 'AGENT_KEYPAIR_NAME' agents_security_group : name : 'DEFAULT_AGENT_SECGROUP_NAME' # The default sg that will be assigned to each compute provisioned by the manager int_network : id : 'MANAGER_NETWORK_ID' name : 'MANAGER_NETWORK_NAME' A configuration sample iaas_config.yaml for Azure : subscription_id : 'YOUR_SUBSCRIPTION_ID' tenant_id : 'YOUR_TENANT_ID' client_id : 'YOUR_CLIENT_ID' client_secret : 'YOUR_CLIENT_SECRET' location : 'YOUR_LOCATION_VALUE' agent_sh_user : 'DEFAULT_AGENT_SSH_USER' agent_private_key_path : 'PATH_TO_AGENT_KEYFILE' "},{"title":"Property definition","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/property_definition.html","date":null,"categories":[],"body":"A property definition defines a named, typed value that can be associated with an entity defined in this specification. It is used to associate a transparent property or characteristic of that entity which can either be set (configured) on or retrieved from it. Keynames Keyname Required Type Description tosca_definitions_version type yes string The required data type for the property. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 description no string The optional description for the property. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 required no (default true) boolean Optional key to define if the property is required (true) or not (false). If this key is not declared for the property definition, then the property SHALL be considered required by default. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 default no N/A An optional key that may provide a value to be used as a default if not provided by another means. This value SHALL be type compatible with the type declared by the property definition’s type keyname. 
alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 status (1) no string (default: supported) The optional status of the property relative to the specification or implementation. See the table below for valid values. N.A. constraints no list of constraints The optional list of sequenced constraints for the property. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 entry_schema (2) no entry schema An optional key used to declare the schema definition for entries of “container” types such as list or map. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 (1) Status has been added in the latest versions of the specification and is not yet supported in alien4cloud. The table below details the supported values as defined in the TOSCA specification. (2) The entry schema definition in the TOSCA specification is inconsistent, as it states you can use a string while the grammar example actually defines a complex object, as defined here . Alien4cloud supports the complex definition as stated in the grammar section of the specification. Status valid values: supported : Indicates the property is supported. This is the default value for all property definitions. unsupported : Indicates the property is not supported. experimental : Indicates the property is experimental and has no official standing. deprecated : Indicates the property has been deprecated by a new specification version. Grammar <property_name> : type : <property_type> description : <property_description> required : <property_required> default : <property_default_value> constraints : - <property_constraint_1> - ... - <property_constraint_n> Example of a “container” type (list or map) <property_name> : type : <property_type> description : <property_description> required : <property_required> default : <property_default_value> entry_schema : description : <schema_description> type : <entries_type> constraints : - <entries_constraint_1> - ... 
- <entries_constraint_n> See: constraints Example The following example shows how to define a node type with properties: node_types : fastconnect.nodes.PropertiesSample : properties : property_1 : type : string property_2 : type : string required : false default : This is the default value of the property description : this is the second property of the node constraints : - min_length : 4 - max_length : 8 property_3 : type : integer default : 45 property_4 : type : list entry_schema : type : integer constraints : - valid_values : [ 2 , 4 , 5 , 8 ] "},{"title":"Property filter","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/property_filter_definition.html","date":null,"categories":[],"body":"A property filter definition defines criteria, using constraint clauses, for the selection of a TOSCA entity based upon its property values. Grammar Short notation: <property_name> : <property_constraint_clause> <property_name> : - <property_constraint_clause_1> - ... - <property_constraint_clause_n> "},{"title":"Puccini CLI (Beta)","baseurl":"","url":"/documentation/1.4.0/user_guide/puccini_cli.html","date":null,"categories":[],"body":"Puccini CLI is a shell to interact with the Puccini orchestrator. Run Puccini CLI The simplest way to get the CLI is to launch the Getting Started script. Then you can cd to alien4cloud-getstarted/puccini-cli-${VERSION}/bin/ ; once inside the folder, launch ./tdk.sh . The script will start an interactive shell with history and auto-completion. Don’t hesitate to type TAB or CTRL+R when you are in the shell mode. // List all available commands help // Get help for a specific command. You should have information about sub-commands’ syntax and what each sub-command does ... help agent Quick getting started The instructions below will help you deploy a sample web app hosted on Tomcat and placed behind an apache load balancer. 
The commands in this example assume that you have all your CSARs inside a folder named Projects/ , which is at the same level as puccini-cli-${VERSION}/ . Note that this exact path is not important; as long as you give the correct path it will work (it can be an absolute path). Download the Docker specific topology and put it in Projects/puccini_topology/ . Fetch samples # Clone samples cd Projects/ git clone https://github.com/alien4cloud/alien4cloud-extended-types.git git clone https://github.com/alien4cloud/samples.git # Launch Puccini CLI cd ../puccini-cli- ${ VERSION } /bin ./tdk.sh Inside Puccini’s interactive shell # Install some types necessary to deploy a topology with an apache load balancer csar install ../../Projects/alien4cloud-extended-types/alien-base-types/ csar install ../../Projects/samples/org/alien4cloud/lang/java/pub/ csar install ../../Projects/samples/org/alien4cloud/lang/java/jdk/linux/ csar install ../../Projects/samples/tomcat-war/ csar install ../../Projects/samples/apache-load-balancer/ csar install ../../Projects/samples/topology-load-balancer-tomcat/ # Create a deployment image deployment create aplb ../../Projects/puccini_topology/ # Create agent to deploy agent create aplb # Follow the log of deployment agent log aplb # Show outputs agent info aplb Get the URL to access the web app from the deployment’s outputs, paste it in your browser and enjoy! You can check the next sections for more details about the CLI’s usage Advanced usage The common workflow for using puccini to develop recipes / manage deployments is: Install CSARs Develop your tosca recipe and topology with pure abstract native types tosca.nodes.Compute , tosca.nodes.Network … then install it. csar install /home/bobo/Documents/fastconnect/samples/org/alien4cloud/lang/java/pub ... 
csar install /home/bobo/Documents/fastconnect/samples/topology-load-balancer-tomcat Create the puccini specific topology to map all abstract native nodes to puccini native nodes. As an example, you can see that the abstract topology uses the abstract tosca.nodes.Compute . The AWS specific topology uses the concrete type org.alien4cloud.puccini.aws.nodes.Instance . The Docker specific topology uses the concrete type org.alien4cloud.puccini.docker.nodes.Container . The Openstack specific topology uses the concrete type org.alien4cloud.puccini.openstack.nodes.Compute . The specific topology imports the abstract topology and overrides nodes and inputs with the same names; you can add more nodes in the specific topology. imports : - war-apache-load-balanced-topology:* topology_template : inputs : - ... node_templates : WebServer : type : org.alien4cloud.puccini.docker.nodes.Container To display all installed csars. csar list Puccini CLI can handle multi-module projects. If you have a project which contains multiple modules (CSARs), Puccini CLI can parse them, construct the dependency graph, and install all modules in the correct order. // Here, we install all the csar artifacts found in the samples project project build /home/bobo/Documents/fastconnect/samples Create a deployment You can create a deployment from the specific Puccini topology created in the previous step, so that Puccini knows which IAAS provider to target. The deployment will then be created with the specific topology, and so concrete types such as org.alien4cloud.puccini.aws.nodes.Instance will be instantiated. Basically, a deployment is a docker image with all the resources necessary to instantiate the topology. deployment create myDeployment /home/bobo/Documents/fastconnect/puccini-topology To display all the existing deployments. deployment list Launch the deployment From deployment images, you can create deployment agents (micro managers), which are docker containers that handle the lifecycle of your application. 
You can deploy/undeploy/scale your application thanks to the agents. # Create agent (docker container) from deployment image and run install workflow agent create myDeployment To display all the existing agents. agent list Tail deployment’s log We can see the log while the deployment is running. agent log myDeployment # To filter the log whose type is workflow_event. agent log --logType = workflow_event myDeployment Show deployment’s information # Show all deployment information such as nodes, instances, executions, outputs ... agent info myDeployment Scale # Scale myNode to a new instance count of 2 agent scale myDeployment myNode 2 Update and resume With agent log myDeployment or agent info --executions myDeployment , you might observe that your deployment has failed. You might be able to fix your recipe and then hot reload it with puccini. Once the recipe is updated, if you took care to make your operations idempotent, you can resume the execution from the failure point (the operation in failure will be re-executed). Even if your operation modified the state of the machine in a way that prevents resuming, you can just connect to the machine, put it back in a clean state and then resume (if we suppose that this takes less time than re-running the whole deployment from the beginning). # After fixing your recipe, update the csar installed in the local repository csar install myCsar # Regenerate the deployment image, update the work folder deployment create my_deployment # Take note that you can bypass the image generation by directly modifying the compiled csar inside path_to_puccini/work/myDeployment/recipe/src/main/resources, but then the next 'agent create myDeployment' will not use the updated recipe # Update recipe on the agent agent update my_deployment # Resume the deployment from the last failure point agent resume my_deployment Undeploy We can undeploy the application after it is deployed. 
# Launch uninstall workflow and delete the agent agent delete myDeployment # Tear down the infrastructure and delete the agent (only deletes IAAS components such as computes and networks) agent delete -f myDeployment # Delete the agent and ignore the deployment agent delete --ignore-deployment myDeployment Deployment with other clouds The example given until now deploys on docker, the default puccini provider. You might want to work with one of the supported IAAS: Openstack, AWS or byon. Configure the provider at path_to_puccini/conf/providers/${provider_name}/${target_name} following the template file provider.conf.tpl ; you must then rename it to provider.conf . As providers, you have the choice between Openstack, AWS and docker. Targets enable you to have multiple configurations for the same IAAS. In all of your commands, when no target is specified, puccini uses the target named default . If you don’t want to configure this statically inside Puccini, you can pass those settings as inputs of your deployment. Create your puccini topology for the configured cloud by importing the puccini provider types: imports : ... - puccini-aws-provider-types:* node_templates : WebServer : type : org.alien4cloud.puccini.aws.nodes.Instance Create your deployment and your agent in the same manner as with the docker provider. An example of an AWS deployment can be found here Ansible support Puccini supports Ansible as an artifact executor. 
To rapidly test Ansible with Puccini : # Use this new image as the base image to build all micro managers from now on, the image has ansible installed deployer use alien4cloud/puccini-deployer-ansible:1.4.0 # Install your types which use Ansible csar install path_to_your_ansible_types # Create the deployment image deployment create --input = path_to_aws_inputs.yml my_deployment path_to_aws_topology # Run your deployment "},{"title":"Puccini (Beta)","baseurl":"","url":"/documentation/1.4.0/orchestrators/puccini/puccini_main_page.html","date":null,"categories":[],"body":"Puccini is a multi-cloud orchestrator that aims to support deployment on various locations. This section details the Puccini orchestrator plugin for Alien4Cloud. Alien4Cloud Puccini support The alien4cloud puccini plugin exposes several nodes so that TOSCA templates can be deployed on various locations, such as Amazon , Openstack , etc… See Supported locations for more details. Orchestrator configuration To be able to use the puccini orchestrator plugin, you need to download the puccini package. Then go to Driver configuration and fill in pucciniHome with the absolute path to the puccini package . "},{"title":"Relationship type","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/relationship_type.html","date":null,"categories":[],"body":"A Relationship Type is a reusable entity that defines the type of one or more relationships between Node Types or Node Templates. Keynames Keyname Required Type Description tosca_definitions_version derived_from no string An optional parent Relationship Type name the Relationship Type derives from. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 version (1) no version An optional version for the Entity Type definition. N.A. metadata (2) no map of string Defines a section used to declare additional metadata information. 
alien_dsl_1_3_0 tosca_simple_yaml_1_0 tags (2) no map of string Defines a section used to declare additional metadata information. alien_dsl_1_3_0 alien_dsl_1_2_0 description no string An optional description for the Relationship Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 attributes no map of attribute definitions An optional list of attribute definitions for the Relationship Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 properties no map of property definitions An optional list of property definitions for the Relationship Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 interfaces no interface definitions An optional list of named interfaces for the Relationship Type. alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 valid_target_types yes string[] A required list of one or more valid target entities or entity types (i.e., Node Types or Capability Types). alien_dsl_1_3_0 alien_dsl_1_2_0 tosca_simple_yaml_1_0 (1) version at type level is defined in TOSCA but is optional, and there is no example of how it should be managed. In alien4cloud we believe that versions should be managed at the service template/archive level and dispatched to every element defined in the service template/archive. (2) metadata appeared in TOSCA while alien4cloud already supported tags; support for the metadata keyword was added in version 1.3.1. Note that if you specify both metadata and tags, one may silently override the other (this should be avoided). Grammar <relationship_type_name> : derived_from : <parent_relationship_type_name> description : <relationship_description> properties : <property_definitions> attributes : <attribute_definitions> interfaces : <interface_definitions> valid_target_types : [ <entity_name_or_type_1> , ... 
, <entity_name_or_type_n> ] See: property_definitions attribute definitions interface definitions Example mycompanytypes.myrelationships.AppDependency : derived_from : tosca.relationships.DependsOn valid_target_types : [ mycompanytypes.mycapabilities.SomeAppCapability ] "},{"title":"Repository definition","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/repository_definition.html","date":null,"categories":[],"body":"A repository definition defines a named external repository which contains deployment and implementation artifacts that are referenced within the TOSCA Service Template. Keynames Keyname Required Type Description tosca_definitions_version description no string The optional description for the repository. alien_dsl_1_3_0 tosca_simple_yaml_1_0 url yes string The URL or network address used to access the repository. alien_dsl_1_3_0 tosca_simple_yaml_1_0 type (1) no string The repository type is an optional string telling how to fetch elements from the repository. alien_dsl_1_3_0 credential no credential The optional credential used to authorize access to the repository. alien_dsl_1_3_0 tosca_simple_yaml_1_0 (1) The type keyname is specific to alien4cloud and tells alien4cloud how to fetch artifacts beyond the protocol information, and how to use specific artifact notations that are closer to the user’s point of view (like for maven artifacts). This keyname is optional, and alien can also find this information based on configured repositories. Repositories available in Alien 4 cloud Alien 4 cloud premium supports the following external repository types: http, git, maven. 
Grammar # Simple definition is as follows: <repository_name> : <repository_address> # The full definition is as follows: <repository_name> : description : <repository_description> url : <repository_address> credential : <authorization_credential> See: artifact_definitions Example The following represents a repository definition: repositories : docker_hub : https://registry.hub.docker.com/ script_repo : url : https://myCompany/script credential : good_user:real_secured_password nexus_artifact_repo : url : https://fastconnect.org/maven/content/repositories/fastconnect credential : bad_user:real_secured_password git_repo : url : https://github.com/myId/myRepo.git "},{"title":"Requirement definition","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/requirement_definition.html","date":null,"categories":[],"body":"A requirement definition allows specification of a requirement that a node needs to fulfill to be instantiated. alien_dsl_1_2_0 and older versions Requirement definition syntax in TOSCA has been quite volatile in the working draft of version 1.0.0 of the specification. The alien_dsl_1_2_0 and earlier tosca_definitions_version values supported in alien4cloud used a different syntax; while requirement definitions were supported, that syntax is not described here. You should look at the migration guide for more information. tosca_simple_yaml_1_0 and alien4cloud 1.4.0 A bug in alien4cloud 1.4.0 prevents tosca_simple_yaml_1_0 from using the standard form of a requirement definition. Instead, alien 4 cloud will parse the alien_dsl_1_2_0 and working-draft version of a requirement definition. The bug is fixed in alien4cloud 1.3.1. Keynames Keyname Required Type Description tosca_definitions_version capability yes string The required reserved keyname used to provide the name of a valid Capability Type that can fulfill the requirement. alien_dsl_1_3_0 tosca_simple_yaml_1_0 description (1) no string The optional description of the Requirement definition. 
alien_dsl_1_3_0 tosca_simple_yaml_1_0 node no string The optional reserved keyname used to provide the name of a valid Node Type that contains the capability definition that can be used to fulfill the requirement. alien_dsl_1_3_0 tosca_simple_yaml_1_0 node_filter (2) no Node filter The optional filter definition that defines a type-compatible target node that can fulfill the requirement. alien_dsl_1_3_0 relationship no string The optional reserved keyname used to provide the name of a valid Relationship Type to construct when fulfilling the requirement. alien_dsl_1_3_0 tosca_simple_yaml_1_0 occurrences no (defaults to [1,1]) range of integer The lower and upper boundaries within which a requirement MUST be matched for Node Templates. The lower bound can be any positive number, with 0 meaning that the requirement is optional; the upper bound can be any positive number or the string unbounded, meaning that there is no upper limit. Both default to 1. alien_dsl_1_3_0 tosca_simple_yaml_1_0 (1) The description keyname is missing from the TOSCA specification, but this looks more like an omission than an intent, so we decided to include it in our tosca_simple_yaml_1_0 support as well. (2) Node filter in the TOSCA specification is used only to specify dangling requirements. Basically, on a node template you can use a node filter on a requirement assignment to tell the orchestrator that it should connect the node template to any suitable node instance it can provide. 
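The following is a hypothetical sketch of the dangling-requirement usage described in note (2); the node template name and the filter contents are illustrative, not taken from this documentation:

```yaml
node_templates:
  my_app:
    type: tosca.nodes.SoftwareComponent
    requirements:
      - host:
          # Dangling requirement: no target node template is named.
          # The node filter tells the orchestrator to connect this node
          # to any suitable Compute instance it can provide.
          node_filter:
            capabilities:
              - host:
                  properties:
                    - num_cpus: { in_range: [ 2, 8 ] }
```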
Grammar # using type <requirement_name> : capability : <capability_type_name> node : <node_type_name> relationship : <relationship_type_name> occurrences : [ <min_occurrences> , <max_occurrences> ] Example node_types : fastconnect.nodes.RequirementSample : requirements : - host : capability : tosca.capabilities.Container node : tosca.nodes.Compute relationship : tosca.relationships.HostedOn occurrences : [ 0 , unbounded ] "},{"title":"Roles","baseurl":"","url":"/documentation/1.4.0/concepts/roles.html","date":null,"categories":[],"body":"Roles in Alien can be mapped to any user, and a user can of course hold any role, and multiple ones, on any resource. Here we explain how we defined the roles and how they can map to an enterprise organization, giving everyone involved in IT creation and/or deployment and/or consumption the right focus, visibility and access to resources. In a standard IT organization we have identified several expert profiles: Some people working at a cross business and applications level: Platform admins are responsible for setting up the clouds or deployment environments. Dev-Ops ensure that software can be easily deployed on platforms and follows the company best-practices. It is important to note that a single software may be composed of several elements that have to run on multiple machines. This is especially true when we want to ensure that the software will be able to support H.A. and scalability requirements. Software Architects are responsible for software architectures; they build application topologies and ensure that best-practices are followed by the various teams in the enterprise. And some working on dedicated application(s) and project(s) The project management team (product owner, scrum master etc.) is responsible for an application. It coordinates the teams interacting on the application, plans versions and releases, defines the environment requirements etc. 
Developer(s) are responsible for building the application and creating the next versions. Support Engineers want to be able to deploy any version currently used by clients (or business teams) to be able to reproduce issues, find workarounds etc. Q.A. Engineers are responsible for testing the upcoming release and making sure that it passes the quality standards of the enterprise and won’t create issues. Of course test automation is a critical aspect of their jobs and being able to deploy complex applications easily is a part of this automation. Production Ops are responsible for running the production of one or multiple projects. They are the ones responsible for everything that is related to a production environment: tuning, deployments, version upgrades, solving live issues etc. Users , well they use the application… What they want is to find an easy way to access their application and find the resources they need. Of course these profiles are not exclusive and a single person can handle or be expert in several profiles; for example it is quite common to have a Production Ops who is also a Dev-Ops. Alien 4 Cloud intends to provide a platform that will help all these people collaborate to build the enterprise IT in a flexible manner. So the question you all want answered is: How do we map this into Alien 4 Cloud ? ADMIN will be able to configure one or multiple deployment targets ( clouds ). And of course associate deployment roles to specific users. COMPONENTS_MANAGER will be able to define packages on how to install, configure, start and connect components (mapped as node types). ARCHITECTS will be able to define global topologies of applications by reusing building blocks (node types defined by components managers). APPLICATIONS_MANAGER will be able to define applications with their own topologies that can be linked to a global topology from architects and that can reuse components defined by the components managers. 
At the application level, several users will be able to collaborate. "},{"title":"SAML integration","baseurl":"","url":"/documentation/1.4.0/admin_guide/saml.html","date":null,"categories":[],"body":" Premium feature This section refers to a premium feature. Alien 4 Cloud supports SAML authentication; this section describes the configuration for SAML enablement. Configuration SAML configuration is done by updating the alien4cloud YAML configuration file (config/alien4cloud-config.yml). The enabled flag must of course be set to true. You should then configure the various options available from the configuration file: saml : enabled : false maxAuthenticationAge : 7200 maxAssertionTime : 3000 # logoutUrl: http://alien4cloud.org # proxy: # host: 193.56.47.20 # port: 8080 ssl : keystore : samlKeystore.jks defaultKey : apollo keystorepassword : nalle123 metadata : idp : url : \"https://idp.ssocircle.com/idp-meta.xml\" # file: \"/path/to/file.xml\" sp : entityId : \"org:alien4cloud:sp\" # entityBaseURL: defaults to localhost:8088 # requestSigned: # wantAssertionSigned: # mapping: # email: EmailAddress # firstname: FirstName # lastname: LastName maxAuthenticationAge and maxAssertionTime allow configuring SAML message validation in alien4cloud so it accepts SAML responses that allow long-duration user sessions (meaning the authentication on the IDP could be quite old). Once alien4cloud is started you can retrieve alien’s Service Provider metadata from http(s)://alien4cloud.host:alien4cloud.port/saml/metadata. 1.3.1 new parameters logoutUrl and attribute mappings are new options in alien4cloud 1.3.1. logoutUrl allows specifying a URL to which the user will be redirected after logging out from alien4cloud when SAML is enabled. Attribute mapping allows alien4cloud to configure the user from the values of attributes sent in the SAML assertion by the IDP. 
The mapping should contain the name of the attribute from which to fetch the email, firstname or lastname. You can use all or only some of the attributes. "},{"title":"Security","baseurl":"","url":"/documentation/1.4.0/admin_guide/security.html","date":null,"categories":[],"body":"The Alien platform contains multiple components; each component has its own security policy and mechanism. On most of the components, Alien uses SSL mutual authentication, or SSL + login. Alien4Cloud UI and Rest API Elasticsearch Orchestrators security Cloudify 3 Post deployment Web Application "},{"title":"Elastic Search","baseurl":"","url":"/documentation/1.4.0/admin_guide/security_elastic_search.html","date":null,"categories":[],"body":" Premium feature This section refers to a premium feature. Prerequisite Generate certificates for your elasticsearch cluster (see Certificate Generation ), and download the premium distribution of Alien4Cloud Configuration of elasticsearch nodes Download elasticsearch 1.7.0 here , and install it at $ELASTIC_SEARCH_HOME Download the plugin search-guard-ssl (this is a backport of the plugin to work with elasticsearch 1.7.0) Move to $ELASTIC_SEARCH_HOME/bin and perform the following command to install the plugin into your elasticsearch installation ./plugin --install search-guard-ssl --url file:// $SEARCH_GUARD_PROJECT /target/releases/search-guard-ssl-1.7.0-SNAPSHOT.zip Copy your certificates to $ELASTIC_SEARCH_HOME/config Modify $ELASTIC_SEARCH_HOME/config/elasticsearch.yml and add the following section for search-guard ssl (it’s just a sample, feel free to modify it to follow your cluster architecture; all available configuration keys can be found here ) : cluster.name : my-cluster network.host : _eth0:ipv4_ searchguard.ssl.http.clientauth_mode : REQUIRE searchguard.ssl.http.enable_openssl_if_available : false searchguard.ssl.http.enabled : true searchguard.ssl.http.keystore_filepath : server-keystore.jks # Keystore password (default: changeit) 
searchguard.ssl.http.keystore_password : changeit searchguard.ssl.http.truststore_filepath : server-truststore.jks # Truststore password (default: changeit) searchguard.ssl.http.truststore_password : changeit searchguard.ssl.transport.enable_openssl_if_available : false searchguard.ssl.transport.enabled : true searchguard.ssl.transport.enforce_hostname_verification : true searchguard.ssl.transport.resolve_hostname : false searchguard.ssl.transport.keystore_filepath : server-keystore.jks # Keystore password (default: changeit) searchguard.ssl.transport.keystore_password : changeit searchguard.ssl.transport.truststore_filepath : server-truststore.jks # Truststore password (default: changeit) searchguard.ssl.transport.truststore_password : changeit discovery.zen.ping.unicast.hosts : [ \"10.67.79.5\" ] discovery.zen.ping.multicast.enabled : false discovery.zen.ping.unicast.enabled : true index.number_of_replicas : 1 Perform the same operations for all your elasticsearch cluster nodes Start your elasticsearch cluster Configuration of Alien In $ALIEN_HOME/config/alien4cloud-config.yml, configure Alien as an elasticsearch transport client: elasticSearch : clusterName : my-cluster local : false client : true prefix_max_expansions : 10 transportClient : true resetData : false # a comma separated list of host:port couples hosts : 10.67.79.4:9300,10.67.79.5:9300 Modify the configuration for elasticsearch $ALIEN_HOME/config/elasticsearch.yml to suit your needs (all available configuration keys can be found here ): path : conf : config gateway : recover_after_nodes : 1 expected_nodes : 1 # bind only to localhost, so we aren't visible and we don't multicast discover others network.host : _eth0:ipv4_ searchguard.ssl.http.clientauth_mode : REQUIRE searchguard.ssl.http.enable_openssl_if_available : false searchguard.ssl.http.enabled : true searchguard.ssl.http.keystore_filepath : client-keystore.jks # Keystore password (default: changeit) searchguard.ssl.http.keystore_password : 
changeit searchguard.ssl.http.truststore_filepath : server-truststore.jks # Truststore password (default: changeit) searchguard.ssl.http.truststore_password : changeit searchguard.ssl.transport.enable_openssl_if_available : false searchguard.ssl.transport.enabled : true searchguard.ssl.transport.enforce_hostname_verification : true searchguard.ssl.transport.resolve_hostname : false searchguard.ssl.transport.keystore_filepath : client-keystore.jks # Keystore password (default: changeit) searchguard.ssl.transport.keystore_password : changeit searchguard.ssl.transport.truststore_filepath : server-truststore.jks # Truststore password (default: changeit) searchguard.ssl.transport.truststore_password : changeit Start Alien; if the index is created, then your configuration is correct and working! "},{"title":"Alien Post-Deployment","baseurl":"","url":"/documentation/1.4.0/admin_guide/security_patch.html","date":null,"categories":[],"body":" Premium feature This section refers to a premium feature. When deploying the post deployment web application, it is recommended to enable SSL to secure the communication with Alien4Cloud. To do so, you have to create a keystore, and possibly a truststore in case of mutual authentication (see Certificate Generation ), and configure the application with the proper SSL properties. Configure the post-deployment web application The post-deployment web application is a Spring Boot application; thus, some properties need to be set on the Java JVM running the application. Two cases: if you are using a configuration file alien4cloud-post-deployment-config.yml, you should add the options in the server section; if you are not using a configuration file, you should set the options on your java command line. 
In case you only want the server to be authenticated to the clients, you need to specify the location of the keystore by adding the following options: server : port : 8080 # You might want to change the port to a standard secured one [ ... ] ssl : # Make sure to change the path to a good one key-store : <relative/path/to/keystore.jks> key-store-password : ****** key-password : ****** Note that if you do not want to perform mutual authentication between Alien4Cloud and the post deployment web application, you should skip this step. server : [ ... ] ssl : [ ... ] # Make sure to change the path to a good one trust-store : relativepath/to/your/truststore/server-truststore.jks trust-store-password : ****** # to require client authentication client-auth : need -Dserver.ssl.key-store = path/to/your/server-keystore/server-keystore.jks -Dserver.ssl.key-store-password = keyStore-password -Dserver.ssl.key-password = key-password Note that if you do not want to perform mutual authentication between Alien4Cloud and the post deployment web application, you should skip this step. -Dserver.ssl.trust-store = path/to/your/truststore/server-truststore.jks -Dserver.ssl.trust-store-password = trustStore-password // the following option is to require client authentication -Dserver.ssl.client-auth = need Configure Alien4Cloud UI / Rest API You have to modify the launch command to add the following java options, so that Alien4Cloud trusts the certificate of the post deployment web application : -Djavax.net.ssl.trustStore = path/to/your/truststore/server-truststore.jks -Djavax.net.ssl.trustStorePassword = trustStore-password Note that if you do not want to perform mutual authentication between Alien4Cloud and the deployment web application, you should skip this step. 
-Djavax.net.ssl.keyStore = path/to/your/client-keystore/client-keystore.jks -Djavax.net.ssl.keyStorePassword = keyStore-password "},{"title":"Alien UI / Rest API","baseurl":"","url":"/documentation/1.4.0/admin_guide/security_ui_rest.html","date":null,"categories":[],"body":"HTTPS The Alien4Cloud UI and Rest API are secured by credentials; a client must log in to authenticate. By default Alien 4 Cloud starts using http rather than https. Enabling SSL is however really simple. Just edit the alien4cloud-config.yml and replace: server : port : 8080 by server : port : 8443 ssl : key-store : keystore.jks key-store-password : ****** key-password : ****** More information on SSL configuration can be found here . Make sure that you have your key store placed alongside the alien4cloud war file: ├── alien4cloud.sh ├── alien4cloud-ssl.sh ├── alien4cloud-ui- { version } -standalone.war ├── keystore.jks ├── config/alien4cloud-config.yml ├── config/elasticsearch.yml Flags secure and http-only To enforce security, you can also prevent attacks on cookies. To do this, in alien4cloud-config.yml , replace: server : port : 8443 ssl : key-store : keystore.jks key-store-password : ****** key-password : ****** by server : port : 8443 ssl : key-store : keystore.jks key-store-password : ****** key-password : ****** session : cookie : http-only : true secure : true "},{"title":"Services","baseurl":"","url":"/documentation/1.4.0/concepts/services.html","date":null,"categories":[],"body":"Services in alien4cloud designate any already-running resource (databases, applications providing an API etc.) that can be used by applications through matching of abstract nodes (just like on-demand resources). The fundamental difference between a service and an on-demand resource is the ownership of the resource lifecycle. While an on-demand resource’s lifecycle is managed by the consuming application, services are elements external to the application but yet consumed by the application. 
For example you may have an on-demand database, which will be created when you deploy the application and (possibly) deleted when the application is undeployed. When using a service you expect someone else to start the service (either externally to alien4cloud or through an alien deployment) and just consume it. In any case you will not be the owner of the service lifecycle. When a service is matched against an abstract node of the topology, the lifecycle of this node is overridden: some relationship operations won’t be executed and the service-side operations will be overridden if any were defined. This is a really different mechanism as the service lifecycle is not managed by the consumer but is owned by the service provider. However the consumer is still responsible for providing a relationship with the implementation of its own side of the relationship. Service security Access to services is configured by the admin with the same options as on-demand resources. They can be accessible to some users, groups of users, applications or specific environments. Read more about… Turning deployments into services While the platform admin can configure external services (deployments not managed by alien but performed manually outside of alien4cloud) manually in the UI, it may be useful to turn an alien4cloud deployment into a service. Before deployment the deployer user can easily check an option to turn their deployment into a service. However the checkbox will be available only if the devops (that configured the topology) has defined a substitution for the topology. Indeed, from the outside world the full deployment is seen as a single node to be consumed without all its potential complexity. Read more about… Service accessibility Services may be available only to some of the locations, either for segregation or because some network settings may prevent their consumption from some locations. 
The configuration of locations is manual and should be done by a platform administrator. Note that services created from deployments are automatically available on the location selected for the deployment once deployed. Read more about… "},{"title":"Services management","baseurl":"","url":"/documentation/1.4.0/user_guide/services_management.html","date":null,"categories":[],"body":" Services in alien4cloud designate any already-running resource (databases, applications providing an API etc.) that can be used by applications through matching of abstract nodes (just like on-demand resources). The fundamental difference between a service and an on-demand resource is the ownership of the resource lifecycle. While an on-demand resource’s lifecycle is managed by the consuming application, services are elements external to the application but yet consumed by the application. For example you may have an on-demand database, which will be created when you deploy the application and (possibly) deleted when the application is undeployed. When using a service you expect someone else to start the service (either externally to alien4cloud or through an alien deployment) and just consume it. In any case you will not be the owner of the service lifecycle. Referencing external services in alien4cloud The first method to define a service in alien4cloud is to manually declare a service. In order to do this, click on [Administration] > Service Let’s say you have a MongoDB database that you want to expose to other applications: you can drag the component mongod-type and drop it on the demarcated zone on the left to create the service Mongo. It’s not shown in the image but mongod-type derives from AbstractMongod After the creation, the service appears in the left-hand list and can be configured. Service details Click on the service to see its details. 
Here the status does not make much sense; a service in the enabled state cannot be modified or deleted (it makes more sense for a service exposed by a deployment, as Alien4Cloud then knows the state of the service). Instance information The instance tab gives you access to the properties and attributes of your service. The property and attribute values that you enter here can be used later by consumers of the service to establish the connection with the service. In the example, the property ip_address of the external Mongo DB has been given. IP Address When a service is not managed by A4C you need to define manually how to access the service (i.e. the IP address) Capability attributes Capability attributes are very important as they provide info on how to use / connect to the service. Alien4Cloud 1.4 does not yet support editing capability attributes. Thus, you cannot define for example the endpoint ip_address on the endpoint itself. To do that, you need to add an attribute at the service node type level following the naming convention: capabilities.YOUR_CAPABILITY_NAME.ATTRIBUTE_KEY . For example in our case of the Mongo service, we would define the capability service_api attribute ip_address like this: Enable service on locations The location tab allows authorizing service access per location. It means only applications deployed on the authorized locations can access the service. Security The security tab allows authorizing service access per application / environment / environment type / user or group. Only authorized entities can use the service at deployment. Once the service has been properly defined and authorization properly configured, you can begin to consume it from your application. As you can see, the example uses the abstract type AbstractMongod which is the base type that was used to define the service. 
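The capabilities.YOUR_CAPABILITY_NAME.ATTRIBUTE_KEY convention described above can be sketched as follows; the capability type name is hypothetical, while the node type and capability names follow the Mongo example:

```yaml
node_types:
  alien.nodes.AbstractMongod:
    derived_from: tosca.nodes.Root
    capabilities:
      service_api: alien.capabilities.MongoEndpoint # hypothetical capability type
    attributes:
      # Exposes ip_address on the service_api capability, following the
      # capabilities.<CAPABILITY_NAME>.<ATTRIBUTE_KEY> naming convention.
      capabilities.service_api.ip_address:
        type: string
```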
On the matching screen, your AbstractMongod will be matched to the service defined earlier Turning deployments into services Turning deployments into services is usually done by the deployment manager of the application environment. Service definition within Alien4Cloud uses substitution in order to expose properties, requirements or capabilities. The first thing you need to do is to define your service topology, and a substitution for it Once the service topology is done, you can set up an environment to be deployed on your chosen location. Before deploying, you can choose to expose your deployment as a service in the Service management section As an admin, the exposed service will be displayed automatically in the service list [Administration] > Service Once the service is exposed, access to the service and the matching can be configured in the same manner as for an external service. However, note that the exposed services are automatically available on the location selected for the deployment once deployed. Limitations multiple capabilities exposure On a topology, a service can expose multiple capabilities, but only one can be used. Otherwise, relationship operations won’t be called properly. 
More precisely, assume we have the following node types: node_types : # abstract type, on which the service will be based org.alien4cloud.nodes.AS abstract : true capabilities : ACapa : org.alien4cloud.capabilities.ACapa BCapa : org.alien4cloud.capabilities.BCapa org.alien4cloud.nodes.A requirements : AReq : org.alien4cloud.capabilities.ACapa org.alien4cloud.nodes.B requirements : BReq : org.alien4cloud.capabilities.BCapa Won’t work: one node AS, connect both A and B to AS #topology node_templates : # one abstract node template, will be matched to a service on deployment config AS : type : org.alien4cloud.service.AS A : type : org.alien4cloud.nodes.A requirements : AReq : node : AS # connect A to AS capability : org.alien4cloud.capabilities.ACapa B : type : org.alien4cloud.nodes.B requirements : BReq : node : AS # also connect B to AS capability : org.alien4cloud.capabilities.BCapa Workaround: two nodes of type AS, connect each of A and B to a different node template of type AS. Match the two AS-typed node templates to the same service on deployment #topology node_templates : # TWO abstract node templates, each will be matched to the SAME service on deployment config AS1 : type : org.alien4cloud.service.AS AS2 : type : org.alien4cloud.service.AS A : type : org.alien4cloud.nodes.A requirements : AReq : node : AS1 # connect A to AS1 capability : org.alien4cloud.capabilities.ACapa B : type : org.alien4cloud.nodes.B requirements : BReq : node : AS2 # connect B to AS2 capability : org.alien4cloud.capabilities.BCapa Scalability and services Services substitution does not yet support the exposure of multiple instances. Output properties cannot reference properties of scaled instances. Services properties Input properties are used both for topology inputs and deployment inputs. Users SHOULD handle connection to services using capability properties/attributes and possibly node attributes . They SHOULD NOT use node properties for that purpose. 
Services from snapshot versions While creation of services out of snapshot types is possible, it is not recommended for two reasons: We believe it is not a good practice to interact with other teams based on unstable / unreleased features. Alien4cloud may not handle node type updates correctly and such usage is done at your own risk. Example The MongoDB service example in the two sections above can be found here . It comes with a topology template mongod-type that defines a simple topology containing a MongoDB hosted on a Compute. This template is exposed as a type named mongod-type using substitution exposition. The node cellar application, which consumes the service, can be found here . In the topology template Nodecellar-ClientService , the abstract node alien.nodes.AbstractMongod was matched to the MongoDB service. Advanced Relationship and service TL;DR: when a relationship involves a service, think “half relationship”: one half relationship manages the source side, and the other one the target side. Services are running resources managed by third parties; as such, only the third party is authorized to manage the service settings. In order to protect the integrity of services, service consumers and service providers, we cannot let everybody alter a service and vice versa. Only operations defined by the owner can be executed on it. For example: a service consumer is not allowed to execute operations on the service side. In TOSCA, the source of the relationship is responsible for defining the operations executed on both sides (source and target); this is a problem when a service is involved. The solution chosen by A4C is to: let the administrator configure the source with one relationship that holds all the operations impacting the source, and another one for the target with operations that will impact the target. not execute any operations impacting a side we don’t manage When a service is involved, a relationship should be seen as a half relationship. 
One part of the relationship is defined by the consumer and the other part by the provider. At runtime the two relationships are merged into one. How to define relationships on the service side Depending on whether the service is a consumer or a provider, you will need to add a half relationship on a service capability or a service requirement. In the case of the MongoDB service described above we need to add the relationship to the “database_endpoint” capability. Let’s say we’d like to run a script every time a new consumer is added. For that we need to create a relationship dedicated to the service side: tosca_definitions_version : alien_dsl_1_3_0 description : A relationship definition for the service side template_name : mongo_db_relaitonship_service_side template_version : 0.1.0-SNAPSHOT template_author : admin imports : - tosca-normative-types:1.0.0-ALIEN12 relationship_types : org.alien4cloud.relationships.NodejsConnectToMongoServiceSide : derived_from : tosca.relationships.ConnectsTo description : Relationship used to define operations run on the mongo side interfaces : Configure : add_source : implementation : scripts/when_new_source.sh And upload this YAML as we would do for a component. Then attach the NodejsConnectToMongoServiceSide to the service capability “database_endpoint” : And that’s it. "},{"title":"Suggestions","baseurl":"","url":"/documentation/1.4.0/user_guide/suggestion.html","date":null,"categories":[],"body":"Suggestions provide default values for some common fields. The suggestions are available on the properties architecture , type and distribution of the capability tosca.capabilities.OperatingSystem . For example, when you set the distribution property of a Compute, Alien will suggest some values. If your value is not in the suggestions, a modal will appear and you can add the new value to the suggested values. 
"},{"title":"Supported locations (IaaSs)","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify3_driver/supported_locations.html","date":null,"categories":[],"body":"In its current state, this provider allows you to deploy your application on several IaaSs, called locations . Amazon Azure ( Premium ) OpenStack vSphere ( Premium ) you can even Bring Your Own Node "},{"title":"Supported locations (IaaSs)","baseurl":"","url":"/documentation/1.4.0/orchestrators/cloudify4_driver/supported_locations.html","date":null,"categories":[],"body":"In its current state, this provider allows you to deploy your application on several IaaSs, called locations . Amazon Azure ( Premium ) OpenStack vSphere ( Premium ) you can even Bring Your Own Node "},{"title":"Supported locations (Beta)","baseurl":"","url":"/documentation/1.4.0/orchestrators/puccini/supported_locations.html","date":null,"categories":[],"body":"In its current state, this provider allows you to deploy your application on several IaaSs, called locations . AWS Openstack Docker you can even Bring Your Own Node "},{"title":"Supported platforms","baseurl":"","url":"/documentation/1.4.0/admin_guide/supported_platforms.html","date":null,"categories":[],"body":"Client Alien supports these different web browsers : Name Version Firefox 31 and higher Chrome 37 and higher Other browsers like Safari or the latest IE version may work but are not automatically tested . Server Java virtual machine Alien 4 Cloud is written in Java for the backend and requires a JVM 8 or higher (Oracle or OpenJDK). Orchestrators and deployment artefacts Cloudify 3.4 is Alien 1.4.0’s primary supported orchestrator. For pure Docker users we also support Marathon as an orchestrator, but due to Marathon’s design it is not possible to support execution of classical TOSCA workflows on top of it. 
Orchestrators Deployment artefacts Cloudify 3 .bat ( alien.artifacts.BatchScript ), .sh ( tosca.artifacts.ShellScript ), Ansible playbooks, Docker images (via [Kubernetes](#/documentation/1.4.0/orchestrators/cloudify3_driver/kubernetes.html) support since 1.3.1) Marathon Docker images Some Alien users also deployed Puppet artifacts through scripts. "},{"title":"Topologies","baseurl":"","url":"/documentation/1.4.0/concepts/topologies.html","date":null,"categories":[],"body":"A topology (or TOSCA’s service template or blueprint) describes a deployment or a subset of a deployment. A topology is a composition of multiple nodes that may be connected through relationships. TOSCA normative types provide several types of relationships that are used to model the different component interactions that exist "},{"title":"Topology editor","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_editor.html","date":null,"categories":[],"body":"Alien 4 Cloud provides a TOSCA topology editor that allows you to define your topologies in an easy way. The editor aims to provide very simple drag and drop edition for the most complex topologies and is accessed from two different places: From the topology catalog to edit a reusable template (topology templates from the catalog can be used to create new applications from a template or to provide complex reusable building blocks through topology substitution). From the application to edit the associated topology. "},{"title":"Dependencies","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_editor_dependencies.html","date":null,"categories":[],"body":" Using the imports keyword, importing an archive into a TOSCA definition allows usage of all types defined in the archive as if they were defined in the current document. These archives are called dependencies . An imported archive can itself have some imports declared: these will be the transitive dependencies . 
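As a sketch, dependencies are declared in a definition file with the imports keyword; the archive names below are borrowed from the conflict example later in this section, and the description is hypothetical:

```yaml
tosca_definitions_version: alien_dsl_1_3_0
description: A topology archive with two direct dependencies
imports:
  - tosca-normative-types:1.0.0-ALIEN12   # direct dependency
  - jdk-type:1.0.0-SNAPSHOT               # may declare its own imports: transitive dependencies
```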
Alien 4 Cloud’s editor supports manual definition and simple auto-resolving of dependencies within a topology. Adding a type to a topology When you drag and drop a component from the catalog into the editor canvas, Alien4Cloud automatically adds the component’s archive into the topology’s dependency set. When several versions of the same dependency archive are available in the catalog, you can choose between versions by clicking on the button below the archive’s name. The same behavior applies when defining relationships between nodes, as shown below. Conflicts auto-resolving It is not possible to use multiple versions of an archive in a topology. To prevent conflicts, when adding a node template (resp. a relationship) from an archive that is already used in a different version in the topology, Alien4Cloud will automatically resolve to importing the latest version of the archive between those two. This behavior also applies recursively to transitive dependencies. Note that the auto-resolving may cause transitive dependency conflicts, as detailed below. The dependencies panel To display a table of a topology’s dependencies, unfold the dependencies panel from the right vertical bar. Each archive used in the topology as a dependency is shown, along with its version. Manually changing a dependency version You can change an archive version by clicking the change version button. Alien4Cloud will automatically launch the necessary recovery operations. If there are missing types in the new version that could affect the topology, then the change is not acknowledged and an error is raised. If needed, transitive dependencies may also be updated to match the newer version. Transitive dependency conflicts Transitive dependency conflicts occur when two or more direct dependencies of the topology depend on the same transitive dependency, but with different versions. If so, conflicts are listed in the dependency panel. 
The topology should theoretically be deployable, but type compatibility is not guaranteed. You may resolve conflicts by manually changing dependency versions. In the example above, the topology is composed of two node templates: a tosca.nodes.Compute from the archive tosca-normative-types:1.0.0-ALIEN12 an alien.nodes.JDK from the archive jdk-type:1.0.0-SNAPSHOT . The dependencies of the topology are therefore tosca-normative-types:1.0.0-ALIEN12 and jdk-type:1.0.0-SNAPSHOT . However, jdk-type:1.0.0-SNAPSHOT also depends on tosca-normative-types , but in version 1.0.0-SNAPSHOT . This causes a conflict, which is resolved by using the 1.0.0-ALIEN12 version. "},{"title":"Archive content","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_editor_file.html","date":null,"categories":[],"body":" On the editor view, the file browser allows you to browse and edit the content of the currently edited topology archive. Upload new file The architect has the possibility to upload a file to, or remove one from, the topology archive. This file can for example be a new script for your components or a new artifact. Click on the upload button to select and upload into the archive a file from your local filesystem. You can also create an empty file, to be edited later on. First, enter the name of the file to create in the edition box, then hit the create button. Note that if you enter a file name like foo/bar.txt , a new folder ‘foo’ with a new file ‘bar.txt’ will be created. Update the YAML of your topology Alien4Cloud provides a view to see the YAML of your topology, useful to export the YAML. Since 1.3.0 , you can directly edit your YAML to, for example, override an existing relationship or create a new capability type. Don’t forget to save your changes by clicking on the save button. Error handling The validation of the YAML is done on saving . If errors occur, a popup will be displayed where you can see what exactly went wrong. In addition, you can see error annotations on the left of the line where the error occurs. 
A mouse-over on the annotation will display the error in question. Limitations The content of the yaml file is currently not automatically updated. If you work with a milestone version of 1.4.0 you must save all pending operations in order to have a generated yaml file up to date with the changes you may have made in the editor. In order to edit the YAML of the topology all previous operations must have been saved. Shortcuts While you are working in the context of the file editor shortcuts like save, undo and redo are not applied to the alien 4 cloud editor but to the file editor (meaning you won’t save pending operations but only changes to the file currently under edition). When the focus leaves the editor panel then shortcuts are applied to the alien 4 cloud topology editor and to pending operations. "},{"title":"Git integration","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_editor_git_integration.html","date":null,"categories":[],"body":"When you create a new topology or application, Alien4Cloud also creates a related local git repository. It allows users to benefit from the git history for every modification that occurs on the topology and it also enables users to push to or pull from a remote repository. Define your remote git repository You will have to configure the remote git URL before being able to push or pull. You can push or pull only if you have saved the modifications on the topology. Push to a remote repository You can decide which branch to push to. If none is defined, it will push to the master branch. In order not to store the credentials inside Alien4Cloud, they are requested each time you want to perform the push action. Alien4Cloud doesn't support conflict resolution right now When having a conflict, Alien4Cloud will push the commits into a new branch ( alien-conflicts-* ) and re-branch to the current one. We will let you merge the changes into your chosen branch using your preferred tool. 
Pull from a remote repository You can decide which branch to pull from. If none is defined, it will pull from the master branch. In order not to store the credentials inside Alien4Cloud, they are requested each time you want to perform the pull action. Alien4Cloud doesn't support conflict resolution right now When having a conflict, you will have to merge using your preferred tool before continuing the edition. "},{"title":"Contextual/Global variables","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_editor_global_variables.html","date":null,"categories":[],"body":"Alien 4 Cloud provides a very convenient system that allows you to reference contextual (application/location) variables in a topology through the usage of TOSCA input functions. This section details how you can reference these variables in the topology and how they work. Advanced inputs This section describes how you can use internal variables defined in a location or application . Those parameters can be used as inputs for node templates’ or capabilities’ properties. Our target for this feature is to allow internal prefixes to target meta-properties over different elements : Targeted element Internal prefix Description location loc_meta_ Targets meta-properties defined on a location application app_meta_ , app_tags_ Targets meta-properties or tags defined on an application Define a property as an internal input When you define a topology, you may want to define some node properties as inputs. An input is by default a value required from the user in order to complete the topology and deploy it. You can define any property as input and then set its value in the deployment page, or indicate that the input is bound to an internal variable defined on a location or the application for example. The name syntax of an internal input is: <INTERNAL_PREFIX><TARGET_PROPERTY> where TARGET_PROPERTY can be a tag’s or a meta property’s name. 
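The naming convention can be sketched as follows in topology YAML; the node template, its type and the property name are hypothetical, only the target_client meta-property and the app_meta_ prefix come from the example in this section:

```yaml
topology_template:
  inputs:
    app_meta_target_client:   # <INTERNAL_PREFIX><TARGET_PROPERTY>
      type: string
      required: true
  node_templates:
    MyComponent:
      type: org.example.SomeComponent   # hypothetical type
      properties:
        client: { get_input: app_meta_target_client }   # resolved from the application meta-property
```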
For example, let’s say that we want to use one of the meta properties defined on our application : target_client First, we set the wanted property as an input. This will lead to the creation of a new input named after the property’s name. Then we have to rename the created input following the above syntax, and using the application meta prefix: app_meta_ target_client If you have some tags or meta-properties defined on your location, same syntax : loc_meta_ MYAPP_META1 loc_meta_ MYAPP_META2 app_tags_ MYAPP_TAG1 app_tags_ MYAPP_TAG2 Meta property naming Note : avoid the dot . character in your meta-property name (e.g. my.meta.1) Missing values We have two possible cases regarding an input and the targeted meta-property : required property : if the provided value doesn’t exist as input the property will stay marked as missing and the topology will not be deployable optional property : if the provided value doesn’t exist you will have a warning but the deployment will still be possible In fact, the deployment steps will help you handle warnings and tasks for a good deployment setup. "},{"title":"Edition history","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_editor_history.html","date":null,"categories":[],"body":"Pending operations history The sub-menu history provides a view of the two histories. The default history is the list of current operations on the topology with : the author the operation name Select an operation to see its details. Git history Every archive under edition in alien4cloud is managed using the git version control system. This enables the git history feature out of the box in addition to the current operations history. Every time a topology is saved in alien4cloud a commit is performed on the local git history with an auto-generated commit message that contains the summary of operations applied while saving. 
date of commit name of the user who saved the topology archive email of the user who saved the topology archive message with all operations applied during save and their author (which may or may not be the same as the commit author) "},{"title":"Overview","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_editor_overview.html","date":null,"categories":[],"body":" Editor operations, save, undo, redo When performing changes in the editor you actually perform an ‘Operation’; operations basically relate to any change you make on the topology, from adding a node, changing the value of a property or adding inputs, to adding files in the catalog. After performing operations on the editor the edition menu will allow you to save , undo or redo the operations. You can also use usual shortcuts to perform these operations ( Ctrl+S , Ctrl+Z , Ctrl+Y on PCs and Cmd+S , Cmd+Z , Cmd+Y on MacOS). Saving operations When you save the topology, all pending operations will be saved, the content of the archive will be updated (yaml or pending file change operations) and all saved operations will be removed from the undo/redo queue meaning that you won’t be able to undo them. If some operations have been undone before saving they will still exist in the operation queue and you will be able to redo them. Leaving the editor with pending operations When you try to leave the editor, if there are pending (unsaved) operations, a popup will notify you, leaving you with two choices: Save : Will trigger a save action on all pending operations. Skip : Will not trigger the save action. Note that this WILL NOT CLEAR the pending operations. Therefore, you will still have them when coming back on the editor, as long as your edit session hasn’t expired. Shortcuts Save , undo and redo are the first shortcuts that we have introduced in the alien 4 cloud editor. 
We will add some more shortcuts in future versions; join us on Slack and ask for your favorite shortcuts, or even better, contribute to the project and make things happen! To get the list of all shortcuts available press the ? key. "},{"title":"Portability insights","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_editor_portability.html","date":null,"categories":[],"body":" Premium feature Portability insight is a Premium feature. Alien4Cloud holds a catalog of components, which are added by uploading archives (CSARs). Also, orchestrator plugins can provide the catalog with specific components they need to run, thus rendering them accessible when browsing or editing a topology. Using them in an abstract topology is a bad practice as they will lower the portability of the topology (only deployable on the location (or with the orchestrator) which provided them). The purposes of the portability concept in this application are: Allowing the component’s designer to specify the portability level of his components Informing users about the specificity of those components, and giving them the portability level of the topology they are creating. Components portability A few pieces of information describe the portability of a component. We will call them Portability indicators or simply indicators from now on. Portability indicators The indicators depend on the type of the component we want to describe. Applicable for all components IAASS : List of IaaSs this component is linked to, meaning you can not deploy it on any other one. ORCHESTRATORS : List of orchestrators the component is linked to, meaning you can only deploy it using this orchestrator. For example, you can add facets in the components view to show all components of a specific IAASS, like the following screen. And when you click on the icon of a component you can see more portability information. 
Applicable only for a Compute The following indicators can only be defined on a Compute node: SUPPORTED_ARTIFACTS : List of artifact types that the compute supports, such as sh, bat, etc... . Therefore, you might not be able to host on it a component whose implementation scripts’ artifact type is not one of this indicator’s values. INSTALLED_PACKAGES : List of the packages that are pre-installed on the compute. This might serve when hosting a component which has a requirement on a specific package like apt, yum, etc... . Note that this particular indicator does not make any sense if provided for the basic Tosca Compute, as it is an abstract type. It would make more sense on an implementation of that node (a template), for example, on an On demand resource of type Compute bound to a specific image. Applicable for other components (except the Compute): REQUIRED_PACKAGES : List of packages that the component needs to be deployed and run correctly. This will be matched against the INSTALLED_PACKAGES indicator of the Compute component to which the component will be linked. YAML providing Alien4Cloud allows the component and/or the orchestrator plugin designer to provide values for these indicators in the definition of the component type in the yaml file, using the above keys. For example: alien.example.Node : derived_from : tosca.nodes.SoftwareComponent properties : [ ... ] capabilities : [ ... ] portability : ORCHESTRATORS : [ Mock ] IAASS : [ OpenStack ] When working with orchestrator plugins, some indicator values such as IAASS and ORCHESTRATORS are automatically registered, and merged (if defined in the yaml) with the existing ones. UI edition We also allow administrators to edit portability indicators on the user interface. This is only valid for orchestrator plugin’s components, and it is done when creating and configuring on demand location resources. 
Topology portability The topology portability level is defined by combining the portability level of all its components. In the topology edition view, every component in the catalog has portability information. So, the architect is able to see progressively the impact of what he chooses to add to the designed topology. To see this impact, go to the portability view of the topology. The fastest way to see the locations compatible with your current topology is the location support view. "},{"title":"Topology recovery","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_editor_recovery.html","date":null,"categories":[],"body":" Problem A topology is designed with nodes and relationships, based on some imported archives. Thus, it depends on these archives’ specific versions present at a given moment in the catalog. When working with snapshot archives , it might happen that these are updated (as we allow uploading and updating snapshots in Alien4cloud) while being used as a dependency, possibly leading to an inconsistent topology depending on what has changed within it. For example, let’s assume we have a topology using a node type coming from a snapshot archive A . Now for some reason, we want to update, or even remove that type from the archive A . What will become of the topology? The purpose of this feature is to try to recover the topology and have a consistent one. Recovery choices After the update of the used snapshot, if you try to edit the topology, you’ll be shown the list of dependencies that have been updated. You’ll not be able to edit the topology if you do not perform one of the reconciliation actions. The choices are: recover or reset the topology. Recover the topology Choosing this, Alien4Cloud will try to recover the topology and make it consistent with what is currently in the repository. 
By analyzing the topology and matching it against the updated archive, the following can be decided: Delete a node template : If the type related to the Node template has been deleted from the archive Delete a relationship template : If the related type has been deleted from the archive, or if the related requirement / capability has been removed from the source / target node type. Rebuild a node / relationship template : If the related type has somehow changed. Reset the topology This option will delete everything within the topology, leaving it completely empty. No rollback possible Beware that these actions automatically save the topology after being executed, so there is no way back with the undo/redo mechanism. Limitations Following are modifications that are not yet processed on topology recovery, along with some illustrations. Capability type Changing the type of a capability that is already a relationship’s target will not lead to the validation / rebuilding of the related relationships. Therefore after recovering, you might end up with a relationship with an invalid targeted capability. 
## original archive node_types : alien.test.nodes.TestComponent : capabilities : capa_to_be_changed : type : alien.test.capabilities.CapaToBeChanged ## Updated archive node_types : alien.test.nodes.TestComponent : capabilities : capa_to_be_changed : type : alien.test.capabilities.CapaToBeChanged2 #updated capability type ## Topology template node_templates : TestComponentSource : type : alien.test.nodes.TestComponentSource requirements : - capa_to_be_changed : ## This relationship will not be rebuilt or validated against the new targeted capability type node : TestComponent capability : alien.test.capabilities.CapaToBeChanged "},{"title":"Substitution","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_editor_substitution.html","date":null,"categories":[],"body":" Topology substitution is a very powerful TOSCA concept that allows you to consider a full or partial topology as a single reusable type. For example you can model a very complex Hadoop cluster as a single reusable node thanks to substitution, so that other people can use a fine grained and tuned Hadoop installation without even knowing all the complexity of the underlying system. Topology substitution / composition A topology template can be used in another template as a type. Topology substitution can make existing topology templates reusable. In order to do this, you must: Create a type that is inherited from your topology template. For example, you have a topology template of an Apache server hosted on a compute as shown in this view. If you want to use this template as a type, you need to click the Substitution panel, which is near the bottom-right corner of the topology composer view. Choose the capabilities/requirements you want to expose. After clicking the Substitutions panel, you can type tosca.nodes.Root in the search bar of the panel. It will create empty Capabilities and Requirements fields. Then, you can select the components whose capabilities/requirements you want to expose. 
By clicking the Expose button next to a capabilities/requirements element of the selected component, you can expose these capabilities/requirements, which will become the capabilities and requirements of the composed new type. Finally, define inputs and outputs that will respectively become properties and attributes of the created type. The inputs of your topology template will become properties of the composed type, and what you choose as outputs will be attributes of the new type. The created type is named like the template and is usable in another template or an application topology. Its content will be wired at runtime. "},{"title":"Workflows","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_editor_workflows.html","date":null,"categories":[],"body":" A workflow defines sequences of steps that act upon the topology’s nodes in order to achieve a defined goal. Each topology embeds several workflows: standard workflows ( install and uninstall ) : when you are designing a topology, a4c maintains the two standard workflows (install and uninstall) following the TOSCA normative lifecycle. You can customize them in order to change the way steps are orchestrated. custom workflows: you can create as many custom workflows as you want. We can also talk about deduced workflows: these workflows are deduced from standard workflows. For example, when you scale up a host, the host node installation sub-workflow is deduced from the install workflow (by isolating all steps concerning this particular host and ignoring all other hosts and links from/to steps outside this host). Workflow steps So a workflow is a set of steps that may be linked together. Actually it’s a directed graph. A step can have predecessors and successors. Rules are : If A is followed by B, then A will be executed before B. if A is followed by B and C, then B and C will be executed in parallel after A (fork). 
if C is preceded by A and B, then C will be executed only after A and B are terminated (join). if a step has no predecessors, it will be linked from the workflow entry point (start). if a step has no successors, it will be linked to the workflow end point (end). Workflow activity A step is associated with an activity. Currently, an activity can be: set state activity: this activity is used to change the state of a node. call operation activity: used to call an operation on a node interface. delegate workflow activity: this is used by a4c to specify that a particular node lifecycle management should be handled by the orchestrator (consider it as a black box). Delegate activity and abstract nodes When you add an abstract node to a topology, a4c will add a delegate workflow activity, until you replace the node by a concrete implementation. If the node is not replaced before the deployment, it must be substituted at the deployment matching stage. The lifecycle for this node will then be managed by the orchestrator. Node relationship operations When you add a node in a topology, a4c adds all the necessary steps in the standard workflows : all the operations of the tosca.interfaces.node.lifecycle.Standard interface are added in the correct order. When you add a relationship between two nodes, the steps concerning those two nodes are organized according to the standard lifecycle rules described in TOSCA. For the moment, operations related to the relationship are not added as steps in the workflow: they are implicit (actually, the Cloudify orchestrator manages such operations at a lower level): operations pre_configure_source , post_configure_source and pre_configure_target , post_configure_target are launched around the configure operation (for respectively source and target). operations add_target and add_source are launched after the start operation. operation remove_target is launched after the stop operation. 
Importance of state change activity As we can see in the image below, each operation call is surrounded with state changes. Here, the create operation is preceded by a state change to ‘ creating ’ and followed by a state change to ‘ created ’. This is defined by TOSCA in the standard lifecycle. It is very important to surround each standard interface operation call with these state changes, and even to add these state changes when the operation is not added in the workflow. These state changes are mainly used as bounds when some relationships are added in the topology. Workflow edition Editing a complex workflow can become a mess if you have a lot of nodes and relationships in your topology. We have tried to build an intuitive editor to help you to customize your workflows. Basic usage rules are: the first time you click on a step, it is pinned (and becomes blue). The pinned step is the one on which you will be able to perform some actions. All possible actions on the pinned step are listed in the panel at the right of the screen. In the image above, we have selected the step named ‘create’ (apache create operation) and we are about to insert an operation call. when a step is pinned, clicking on other steps will add them to the selection (yellow background). Then you will be able to perform actions between the pinned step and the selected steps. In the image below, we are adding a link between the apache.create and the php.create When you edit a workflow, some validations are performed and some errors can be raised: cycles are avoided. state changes are not allowed in parallel for a given node. state changes must follow a defined order (typically started can not be set before created). In the image above, a cycle is detected and an error is raised. Some actions are not allowed: you can not remove/add state change activity steps in standard workflows. you can not remove delegate activity steps in standard workflows. 
you can not add any activity for an abstract node. Workflow limitations Topology composition Custom workflows are not compatible with topology composition: when you use topology composition (when you add a node of a type that is a result of a template exposition), all your customizations will be lost (the standard workflow will be regenerated). "},{"title":"Topology template","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/topology_template.html","date":null,"categories":[],"body":"This section defines the topology template of a cloud application. The main ingredients of the Topology Template are node templates representing components of the application. Keynames A Topology Template contains the following element keynames: Keyname Required Type Description description no string Declares a description for this Service Template and its contents. inputs no Defines a set of global input parameters passed to the template when it is instantiated. This provides a means for template authors to provide points of variability to users of the template in order to customize each instance within certain constraints. input_artifacts no Defines artifacts as inputs. substitution_mappings no Describes how this topology can be used as a type in another one. node_templates yes Defines a list of Node template s that model the components of an application or service’s topology within the Service Template. relationship_templates no Defines a list of Relationship Templates that are used to model the relationships (e.g., dependencies, links, etc.) between components (i.e., Node Templates) of an application or service’s topology within the Service Template. outputs no This optional section allows for defining a set of output parameters provided to users of the template. For example, this can be used for exposing the URL for logging into a web application that has been set up during the instantiation of a template. 
groups no This is an optional section that contains grouping definitions for node templates. Grammar The overall structure of a TOSCA Topology Template and its top-level key collations using the TOSCA Simple Profile is shown below: topology_template : description : a description of the topology template inputs : # list of global input parameters input_artifacts : # map of artifacts defined as inputs (non TOSCA) substitution_mappings : # define substitution mapping node_templates : # list of node templates relationship_templates : # list of relationship templates groups : # list of groups defined in service template outputs : # list of output parameters description This optional element provides a means to include single or multiline descriptions within a TOSCA Simple Profile template as a scalar string value. inputs This optional element provides a means to define parameters, their allowed values via constraints and default values within a TOSCA Simple Profile template. This section defines the template-level input parameters. Inputs here would ideally be mapped to BoundaryDefinitions in TOSCA v1.0. Treat input parameters as fixed global variables (not settable within template) If not in input take default (nodes use default) Grammar inputs : <property_definition_1> ... <property_definition_n> Examples Simple example without any constraints: inputs : fooName : type : string description : Simple string typed property definition with no constraints. default : bar Example with constraints: inputs : SiteName : type : string description : string typed property definition with constraints default : My Site constraints : - min_length : 9 The parameters (properties) that are listed as part of the inputs block could be mapped to PropertyMappings provided as part of BoundaryDefinitions as described by the TOSCA v1.0 specification. input_artifacts This section defines template-level input artifacts. Such artifacts can be shared by several nodes. 
Their content is defined at deployment time. The section input_artifacts and the function get_input_artifact are not yet defined in TOSCA. Examples In this example, an input artifact is defined and shared by two different nodes: topology_template : input_artifacts : my_war_file : type : alien.artifacts.WarFile node_templates : War1 : type : alien.nodes.cloudify.War artifacts : war_file : implementation : { get_input_artifact : my_war_file } type : alien.artifacts.WarFile War2 : type : alien.nodes.cloudify.War artifacts : war_file : implementation : { get_input_artifact : my_war_file } type : alien.artifacts.WarFile substitution_mappings Substitution allows you to compose topologies by combining templates. To be usable in another topology, you must define what a topology template will expose: capabilities requirements properties attributes Examples In the example below, the topology template defines 2 nodes. It’s exposed as a tosca.nodes.Root (this means that the created type will inherit tosca.nodes.Root ). It will have: a capability named ‘host’ that will be wired to the capability ‘host’ of the node ‘Mysql’. a requirement named ‘network’ that will be wired to the requirement ‘network’ of the node ‘Compute’ Since inputs and outputs are defined for this template, it will also have: a property named ‘db_port’ an attribute named ‘Mysql_database_endpoint_port’ topology_template : inputs : db_port : type : integer required : true default : 3306 description : The port on which the underlying database service will listen to data. 
substitution_mappings : node_type : tosca.nodes.Root capabilities : host : [ Mysql , host ] requirements : network : [ Compute , network ] node_templates : Mysql : type : alien.nodes.Mysql properties : db_port : { get_input : db_port } requirements : - host : node : Compute capability : tosca.capabilities.Container relationship : tosca.relationships.HostedOn Compute : type : tosca.nodes.Compute outputs : Mysql_database_endpoint_port : value : { get_property : [ Mysql , database_endpoint , port ] } node_templates This element lists the Node Templates that describe the (software) components that are used to compose cloud applications. If a node template name contains a special character (i.e. a character that is not an alphanumeric character from the basic Latin alphabet or an underscore), that character will be replaced by an underscore. Grammar node_templates : <node_template_defn_1> ... <node_template_defn_n> Example node_templates : my_webapp_node_template : type : WebApplication my_database_node_template : type : Database The node templates listed as part of the node_templates block can be mapped to the list of NodeTemplate definitions provided as part of the TopologyTemplate of a ServiceTemplate as described by the TOSCA v1.0 specification. see: Node template relationship_templates Not yet supported in Alien 4 Cloud. groups The group construct is a composition element used to group one or more node templates within a TOSCA Service Template. It is mainly used to apply a Policy onto a group of nodes. Grammar groups : <group_name_A> : members : [ node1 , ... nodeN ] policies : - <policy_defn_A_1> ... - <policy_defn_A_n> <group_name_B> : members : [ node1 , ... nodeN ] policies : - <policy_defn_B_1> ... - <policy_defn_B_n> Example node_templates : server1 : type : tosca.nodes.Compute # more details ... server2 : type : tosca.nodes.Compute # more details ... server3 : type : tosca.nodes.Compute # more details ... 
groups : server_group_1 : members : [ server1 , server2 ] policies : - name : my_scaling_ha_policy type : tosca.policy.ha see: Policy outputs This optional element provides a means to define the output parameters that are available from a TOSCA Simple Profile service template. Grammar outputs : <property_definitions> Example outputs : server_ip : description : The IP address of the provisioned server. value : { get_attribute : [ my_server , ip_address ] } "},{"title":"Edition validation","baseurl":"","url":"/documentation/1.4.0/user_guide/topology_validation.html","date":null,"categories":[],"body":"Topology validation In the topology validation view you can see the validation status of both the current topology and the last saved topology. The current validation is the validation of the last saved topology with the pending operations applied. The validation of the last saved topology is important because it is the topology used for deployment. Be sure to save a valid topology if you want to deploy. "},{"title":"TOSCA","baseurl":"","url":"/documentation/1.4.0/concepts/tosca.html","date":null,"categories":[],"body":"The TOSCA specification allows users to specify a cloud application’s topology by defining a set of nodes that are connected to each other using relationships. The TOSCA specification aims to provide a good meta-definition of cloud applications and their components, focusing on the following goals: Reusability of components Interoperability of TOSCA archives across the different TOSCA containers In order to manage reusability of components and defined recipes, TOSCA allows definition of NodeTypes that specify the available components and optionally their implementation (for example a Java NodeType and the script implementation to install it on a virtual server). The defined NodeTypes can then be reused when a developer or application architect wants to define the topology of a cloud application. 
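As a sketch of the reuse described above, a custom node type can derive from a normative type and then be reused across application topologies. The type name, property and script path below (alien4cloud.examples.Java, java_version, scripts/install_java.sh) are illustrative assumptions, not part of the normative types:

```yaml
node_types:
  # Hypothetical reusable component type deriving from a normative type
  alien4cloud.examples.Java:
    derived_from: tosca.nodes.SoftwareComponent
    description: Installs a Java runtime on its host Compute node.
    properties:
      java_version:
        type: string
        default: "8"
    interfaces:
      lifecycle:
        operations:
          # script packaged in the same Cloud Service Archive
          create: scripts/install_java.sh
```

Any topology can then declare a node template of type alien4cloud.examples.Java hosted on a tosca.nodes.Compute, without knowing anything about the installation script.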
TOSCA Simple Profile in YAML TOSCA Simple Profile in YAML allows definition of TOSCA elements in a YAML format rather than XML. The YAML format is simpler to write and offers some shorter ways to define a TOSCA definition. Note: TOSCA Simple Profile is a working draft and has not been released to the public yet. The current Alien 4 Cloud version uses Alien 4 Cloud’s specific DSL, which is really close to the latest TOSCA Simple Profile in YAML TC work. This may be subject to some updates in the future. TOSCA in Alien 4 Cloud In Alien 4 Cloud, TOSCA can be used to define both Types (catalog elements) and Application topologies (Templates). Alien 4 Cloud tools, like the topology editor, allow you to create Application topologies that can be exported to TOSCA Templates. Alien 4 Cloud supports a slightly modified version of TOSCA Simple Profile in YAML in order to add features that are specific to the Alien 4 Cloud context. However we are able to load pure TOSCA compliant templates and also export topologies as pure TOSCA templates. The export feature will be available in the next release. Cloud Service Archive Every element in TOSCA must be contained in a Cloud Service Archive (CSAR). A Cloud Service Archive is a folder or a zip file that contains type and template definitions and any other files required for element implementations. ├── my-definition-file.yml ├── images │ ├── component-icon.png │ └── ... ├── scripts │ └── install.sh ├── lib │ └── tosca-normative-types.yml The entry points for the Cloud Service Archive are the definition files placed at the root of the Archive. Basically this is any .yaml or .yml file that can be found at the Archive root. To create your own CSAR, please refer to this section . Alien 4 Cloud currently supports only a single service definition file at the root level. This definition file can however reference other definition files within the archive through the imports feature. 
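To make the archive layout concrete, a minimal root definition file for a tree like the one above might look like the sketch below. The template metadata values and the file-path import notation are assumptions for illustration and may vary with the DSL version you target:

```yaml
tosca_definitions_version: alien_dsl_1_3_0

# Hypothetical archive metadata
template_name: my-sample-archive
template_version: 1.0.0
template_author: admin

imports:
  # reference another definition file packaged in the archive (lib folder above)
  - "lib/tosca-normative-types.yml"
```

Types declared in the imported file can then be referenced (e.g. as derived_from targets) anywhere in the root definition file.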
"},{"title":"Writing custom types","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_concepts_types_custom.html","date":null,"categories":[],"body":"The TOSCA specification allows definition of a cloud application by defining a set of nodes that are connected to each other using relationships. In order to improve reusability of components and defined recipes, TOSCA allows the definition of NodeTypes that define components and optionally their implementation (for example a Java NodeType and the script implementation to install it on a virtual server). The defined NodeTypes can then be reused when a developer or application architect wants to define the topology of a cloud application. TOSCA, and thus Alien 4 Cloud, allows you to define some abstract types (basically meta-types without implementation). This makes it possible to dissociate specific technical implementations from the actual definition of a component (writing an abstract Java node and several implementations with chef, puppet etc.). This can also be leveraged in order to meta-model your applications for the cloud even if you don’t need to deploy them right now. Alien 4 Cloud advisory features for moving to the cloud leverage this to quickly map your Information System and get feedback on your application’s cloud maturity and migration advisory. The sub-sections detail how you can write your own Capability Types, Node Types and Relationship Types to extend the ones that Alien 4 Cloud already provides. Definition of Node Types and other elements in TOSCA should be done in a definition file and packaged in a Cloud Service Archive. "},{"title":"Normative Types","baseurl":"","url":"/documentation/1.4.0/devops_guide/normative_types/tosca_concepts_types_normative.html","date":null,"categories":[],"body":"The TOSCA Specification defines some basic root types (TOSCA Normative types). There are default types for the infrastructure and for the application. 
Most application components, however, are not part of the normative types but should extend from the TOSCA root types. This allows the container to leverage the default node lifecycle in order to automate Plan creation. If you add some custom nodes that don’t extend from the Normative types, the container will not be able to include them in an auto-generated plan, and every application that uses such types will require a custom defined plan. Even though it is possible to do so, this is not recommended. Normative Lifecycle The TOSCA Normative types define the root nodes and the default lifecycle to ease writing and using TOSCA for real applications. The default lifecycle can be extended and improved through the creation of custom plans but should fit most usages. "},{"title":"Capabilities","baseurl":"","url":"/documentation/1.4.0/devops_guide/normative_types/tosca_concepts_types_normative_capabilities.html","date":null,"categories":[],"body":"Normative capability types in TOSCA tosca.capabilities.Root This is the default (root) TOSCA Capability Type definition that all other TOSCA Capability Types derive from. Definition capability_types : tosca.capabilities.Root : description : > This is the default (root) TOSCA Capability Type definition that all other TOSCA Capability Types derive from. tosca.capabilities.Container The Container capability, when included on a Node Type or Template definition, indicates that the node can act as a container for (or a host for) one or more other declared Node Types. Properties Name Required Type Constraints Description valid_node_types true string[] A list of one or more names of Node Types that are supported as containees that declare the Container type as a Capability. Definition tosca.capabilities.Container : derived_from : tosca.capabilities.Root properties : valid_node_types : type : string[] required : true description : Array of node types that are valid node types to be contained. 
description : > A list of one or more names of Node Types that are supported as containees that declare the Container type as a Capability. tosca.capabilities.Endpoint This is the default TOSCA type that should be used or extended to define a network endpoint capability. Properties Name Required Type Constraints Description protocol yes string None The name of the protocol (i.e., the protocol prefix) that the endpoint accepts. Examples: http, https, tcp, udp, etc. port yes integer greater_or_equal:1 less_or_equal:65535 The port of the endpoint. secure no boolean default = false Indicates if the endpoint is a secure endpoint. Definition tosca.capabilities.Endpoint : derived_from : tosca.capabilities.Feature properties : protocol : type : string default : http port : type : integer constraints : - greater_or_equal : 1 - less_or_equal : 65535 secure : type : boolean default : false tosca.capabilities.DatabaseEndpoint This is the default TOSCA type that should be used or extended to define a specialized database endpoint capability. Definition tosca.capabilities.DatabaseEndpoint : derived_from : tosca.capabilities.Endpoint tosca.capabilities.Attachment This is the default TOSCA type that should be used or extended to define a block storage capability. Definition tosca.capabilities.Attachment : derived_from : tosca.capabilities.Root "},{"title":"Nodes","baseurl":"","url":"/documentation/1.4.0/devops_guide/normative_types/tosca_concepts_types_normative_nodes.html","date":null,"categories":[],"body":"The nodes on this page follow the exact TOSCA normative types except for the added tags section that we use in ALIEN to specify additional tags on a component. One of them is a specific tag that we use to package the icon that will be used in the UI for a given component. Normative node types in TOSCA tosca.nodes.Root This is the Root TOSCA Node Type that other nodes extend from. 
This provides a consistent set of features for modeling and management (e.g., consistent definitions for requirements, capabilities and lifecycle interfaces). All Node Type definitions SHOULD extend from the TOSCA Root Node Type. This allows your custom nodes to be included in the default lifecycle generation (based on the root node lifecycle interface). Interfaces The Root node uses the lifecycle interface. See more information on normative types lifecycle. Definition node_types : tosca.nodes.Root : abstract : true description : > This is the default (root) TOSCA Node Type that all other TOSCA nodes should extend. This allows all TOSCA nodes to have a consistent set of features for modeling and management (e.g., consistent definitions for requirements, capabilities, and lifecycle interfaces). tags : icon : /images/root.png requirements : dependency : type : tosca.capabilities.Root occurrences : [ 0 , unbounded ] interfaces : lifecycle : description : Default lifecycle for nodes in TOSCA. operations : create : description : Basic lifecycle create operation. configure : description : Basic lifecycle configure operation. start : description : Basic lifecycle start operation. stop : description : Basic lifecycle stop operation. delete : description : Basic lifecycle delete operation. tosca.nodes.Compute Represents a real or virtual machine or ‘server’. Information specified on the Compute node will be used to find a machine that fits the given requirements among the cloud’s available machines. If no sizing information is specified, the cloud provider’s default machine will be used. It is strongly recommended to specify at least the required CPUs and memory. Properties Name Required Type Constraints Description num_cpus no integer >= 1 Number of (actual or virtual) CPUs associated with the Compute node. disk_size no integer >=0 Size of the local disk, in Gigabytes (GB), available to applications running on the Compute node. 
mem_size no integer >=0 Size of memory, in Megabytes (MB), available to applications running on the Compute node. os_arch yes string none The host Operating System (OS) architecture. Example of valid values includes: x86_32, x86_64, etc. os_type yes string none The host Operating System (OS) type. Example of valid values includes: linux, windows, aix, macos, etc. os_distribution no string none The host Operating System (OS) distribution. Example of valid values includes: debian, fedora, rhel, and ubuntu os_version no string none The host Operating System (OS) version. Attributes Name Required Type Description ip_address no string The primary IP address assigned by the cloud provider that applications may use to access the Compute node. Definition node_types : tosca.nodes.Compute : derived_from : tosca.nodes.Root description : > Represents a real or virtual machine or ‘server’. Information specified on the Compute node will be used to find a machine that fits the given requirements among the cloud's available machines. If no sizing information is specified, the cloud provider's default machine will be used. It is strongly recommended to specify at least the required CPUs and memory. properties : num_cpus : type : integer constraints : - greater_than : 0 description : Number of (actual or virtual) CPUs associated with the Compute node. mem_size : type : integer constraints : - greater_than : 0 description : Size of memory, in Megabytes (MB), available to applications running on the Compute node. disk_size : type : integer constraints : - greater_than : 0 description : Size of the local disk, in Gigabytes (GB), available to applications running on the Compute node. os_arch : type : string required : true constraints : - valid_values : [ \"x86_32\" , \"x86_64\" ] description : The host Operating System (OS) architecture. 
os_type : type : string required : true constraints : - valid_values : [ \"linux\" , \"aix\" , \"mac os\" , \"windows\" ] description : The host Operating System (OS) type. os_distribution : type : string description : The host Operating System (OS) distribution. os_version : type : string description : The host Operating System version. attributes : ip_address : type : string description : > The primary IP address assigned by the cloud provider that applications may use to access the Compute node. Note: This is used by the platform provider to convey the primary address used to access the compute node. Future working drafts will address implementations that support floating or multiple IP addresses. capabilities : host : type : tosca.capabilities.Container properties : valid_node_types : [ tosca.nodes.SoftwareComponent ] tosca.nodes.BlockStorage The TOSCA BlockStorage node currently represents a server-local block storage device (i.e., not shared) offering evenly sized blocks of data from which raw storage volumes can be created. Properties Name Required Type Constraints Description size no integer >0 The requested storage size in MegaBytes (MB). volume_id no string None ID of an existing volume (that is in the accessible scope of the requesting application). snapshot_id no string None Some identifier that represents an existing snapshot that should be used when creating the block storage (volume). Attributes Name Required Type Constraints Description volume_id no string None ID provided by the orchestrator for newly created volumes. Definition node_types : tosca.nodes.BlockStorage : derived_from : tosca.nodes.Root description : > The TOSCA BlockStorage node currently represents a server-local block storage device (i.e., not shared) offering evenly sized blocks of data from which raw storage volumes can be created. 
tags : icon : /images/volume.png properties : size : type : integer constraints : - greater_than : 0 description : The requested storage size in MegaBytes (MB). volume_id : type : string description : ID of an existing volume (that is in the accessible scope of the requesting application). snapshot_id : type : string description : Some identifier that represents an existing snapshot that should be used when creating the block storage (volume). attributes : volume_id : type : string description : ID provided by the orchestrator for newly created volumes. requirements : attachment : type : tosca.capabilities.Attachment tosca.nodes.ObjectStorage The TOSCA ObjectStorage node represents storage that provides the ability to store data as objects (or BLOBs of data) without consideration for the underlying filesystem or devices. Properties Name Required Type Constraints Description store_name yes string None The logical name of the object store (or container). store_size no integer >=0 The requested initial storage size in Gigabytes (GB). store_maxsize no integer >0 The requested maximum storage size in Gigabytes (GB). Definition node_types : tosca.nodes.ObjectStorage : abstract : true derived_from : tosca.nodes.Root description : > The TOSCA ObjectStorage node represents storage that provides the ability to store data as objects (or BLOBs of data) without consideration for the underlying filesystem or devices. tags : icon : /images/objectstore.png properties : store_name : type : string required : true description : The logical name of the object store (or container). store_size : type : integer constraints : - greater_or_equal : 0 description : The requested initial storage size in Gigabytes. store_maxsize : type : integer constraints : - greater_than : 0 description : The requested maximum storage size in Gigabytes. 
tosca.nodes.SoftwareComponent The TOSCA SoftwareComponent node represents a generic software component that can be managed and run by a TOSCA Compute Node Type. Properties Name Required Type Constraints Description version no version None The software component’s version. Definition node_types : tosca.nodes.SoftwareComponent : abstract : true derived_from : tosca.nodes.Root description : > The TOSCA SoftwareComponent Node Type represents a generic software component that can be managed and run by a TOSCA Compute Node Type. requirements : host : type : tosca.nodes.Compute relationship_type : tosca.relationships.HostedOn tags : icon : /images/software.png tosca.nodes.WebServer The TOSCA WebServer Node Type represents an abstract software component or service that is capable of hosting and providing management operations for one or more WebApplication nodes. Definition node_types : tosca.nodes.WebServer : abstract : true derived_from : tosca.nodes.SoftwareComponent description : > The TOSCA WebServer Node Type represents an abstract software component or service that is capable of hosting and providing management operations for one or more WebApplication nodes capabilities : http_endpoint : type : tosca.capabilities.Endpoint https_endpoint : type : tosca.capabilities.Endpoint host : type : tosca.capabilities.Container properties : valid_node_types : [ tosca.nodes.WebApplication ] tosca.nodes.WebApplication The TOSCA WebApplication node represents a software application that can be managed and run by a TOSCA WebServer node. Specific types of web applications such as Java, etc. could be derived from this type. Definition node_types : tosca.nodes.WebApplication : derived_from : tosca.nodes.Root description : > The TOSCA WebApplication node represents a software application that can be managed and run by a TOSCA WebServer node. Specific types of web applications such as Java, etc. could be derived from this type. 
requirements : host : type : tosca.nodes.WebServer relationship_type : tosca.relationships.HostedOn tosca.nodes.DBMS The TOSCA DBMS node represents a typical relational, SQL Database Management System software component or service. Properties Name Required Type Constraints Description dbms_port yes integer None The port the DBMS service will listen to for data and requests. dbms_root_password no string None The root password for the DBMS service. Definition node_types : tosca.nodes.DBMS : abstract : true derived_from : tosca.nodes.SoftwareComponent description : > The TOSCA DBMS node represents a typical relational, SQL Database Management System software component or service. tags : icon : /images/relational_db.png properties : dbms_root_password : type : string description : the root password for the DBMS service. dbms_port : type : integer description : the port the DBMS service will listen to for data and requests capabilities : host : type : tosca.capabilities.Container properties : valid_node_types : [ tosca.nodes.Database ] tosca.nodes.Database Base type for the schema and content associated with a DBMS. The TOSCA Database node represents a logical database that can be managed and hosted by a TOSCA DBMS node. Properties Name Required Type Constraints Description db_user yes string None The special user account used for database administration. db_password yes string None The password associated with the user account provided in the ‘db_user’ property. db_port yes integer None The port the database service will use to listen for incoming data and requests. db_name yes string None The logical database name. Definition node_types : tosca.nodes.Database : derived_from : tosca.nodes.Root description : > Base type for the schema and content associated with a DBMS. The TOSCA Database node represents a logical database that can be managed and hosted by a TOSCA DBMS node. 
tags : icon : /images/relational_db.png properties : db_user : type : string required : true description : The special user account used for database administration. db_password : type : string required : true description : The password associated with the user account provided in the ‘db_user’ property. db_port : type : integer required : true description : The port the database service will use to listen for incoming data and requests. db_name : type : string required : true description : The logical name of the database. "},{"title":"Relationships","baseurl":"","url":"/documentation/1.4.0/devops_guide/normative_types/tosca_concepts_types_normative_relationships.html","date":null,"categories":[],"body":"Normative relationship types in TOSCA tosca.relationships.Root This is the default (root) TOSCA Relationship Type definition that all other TOSCA Relationship Types derive from. Definition tosca.relationships.Root : # The TOSCA root relationship type has no property mappings interfaces : tosca.interfaces.relationship.Configure : documentation : > Default lifecycle for nodes in TOSCA. operations : pre_configure_source : documentation : Operation to pre-configure the source endpoint. pre_configure_target : documentation : Operation to pre-configure the target endpoint. post_configure_source : documentation : Operation to post-configure the source endpoint. post_configure_target : documentation : Operation to post-configure the target endpoint. add_target : documentation : Operation to add a target node. remove_target : documentation : Operation to remove a target node. tosca.relationships.DependsOn This type represents a general dependency relationship between two nodes. DependsOn impacts the TOSCA default lifecycle: a node that depends on a target node will be started after the target node has actually been started. Definition tosca.relationships.DependsOn : derived_from : tosca.relationships.Root valid_target_types : [ tosca.capabilities.Root ] tosca.relationships.HostedOn This type represents a hosting relationship between two nodes. 
Definition tosca.relationships.HostedOn : derived_from : tosca.relationships.DependsOn valid_target_types : [ tosca.capabilities.Container ] tosca.relationships.ConnectsTo This type represents a network connection relationship between two nodes. Definition tosca.relationships.ConnectsTo : derived_from : tosca.relationships.DependsOn valid_target_types : [ tosca.capabilities.Endpoint ] tosca.relationships.AttachTo This type represents an attachment relationship between two nodes. For example, an AttachTo relationship type would be used for attaching a storage node to a Compute node. Definition tosca.relationships.AttachTo : derived_from : tosca.relationships.Root valid_target_types : [ tosca.capabilities.Attachment ] properties : location : type : string constraints : min_length : 1 device : type : string tosca.relationships.RoutesTo This type represents an intentional network routing between two Endpoints in different networks. Definition tosca.relationships.RoutesTo : derived_from : tosca.relationships.ConnectsTo valid_target_types : [ tosca.capabilities.Endpoint ] "},{"title":"Workflows","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_concepts_workflows.html","date":null,"categories":[],"body":"The TOSCA Specification defines the notion of Plans . Plans are basically workflows that the TOSCA container will be able to leverage to administrate the defined TOSCA application. The specification defines two basic workflows (plans): build : Used to instantiate and start a topology. terminate : Used to tear down a topology. In order to ease TOSCA usage, the normative types specification includes default lifecycle operations on node types and relationship types that can be used to automatically generate workflows (plans). This is why most users won’t have to define plans. Workflow definition Workflow definition is inspired by BPMN2 but focuses on the events, gateways and activities required for TOSCA. 
The following section defines the available elements and the way to define them in a TOSCA Simple Profile in YAML. Definition of elements is also adapted to match the TOSCA Simple Profile in YAML concepts. Events Start event Every plan should start with the start event; if omitted, the container will automatically add it as the first element of the workflow. Graphical representation The following symbol represents the start event. Grammar workflows : <flow_id> : <id> : startEvent End event Every plan should finish with the end event; if omitted, the container will automatically add it as the last element of the workflow. Graphical representation The following symbol represents the end event. Grammar workflows : <flow_id> : <id> : endEvent Update State send message event Update the state of a node template or relationship template. Graphical representation The following symbol represents the update state send event. target: state Grammar workflows : <flow_id> : # Simple notation <id> : stateUpdate:<target>#<state> # Detailed notation <id> : stateUpdate : target : <target> state : <state> Update State receive message event Receive a state update to trigger the next operation. Graphical representation The following symbol represents the update state receive event. target: state Grammar workflows : <flow_id> : # Simple notation <id> : receiveStateUpdate:<target>#<state> # Detailed notation <id> : receiveStateUpdate : target : <target> state : <state> Activities The single activity a TOSCA plan can contain is a specific execute operation Task activity. Execute task Execute allows you to execute an operation defined on an entity’s (node or relationship) interface. Graphical representation Grammar workflows : <flow_id> : # Simple notation <id> : execute : <target>#<interface>#<operation> # Detailed notation <id> : execute : target : <target> interface : <interface> operation : <operation> Gateways The only gateway used to define TOSCA workflows is the parallel gateway. 
A parallel gateway can be diverging or converging. To ease configuration of the flow, the two gateways are considered here as separate elements. Parallel Diverging gateway A parallel diverging gateway allows you to specify subflows that will run concurrently. Note that if a task is specified in the flow after a Parallel Diverging Gateway, a Parallel Converging Gateway including all elements from the previous diverging gateway is automatically added to the flow. Graphical representation Grammar workflows : <flow_id> : <id> : divergingGateway : <subflow_id_1> : <task_id>... <subflow_id_2> : <task_id>... ... <subflow_id_n> : <task_id>... Parallel Converging gateway A Parallel Converging gateway allows joining concurrent subflows back into a single flow. Graphical representation Grammar workflows : <flow_id> : <id> : convergingGateway : <id_1> <id_2> ... <id_n> "},{"title":"Workflow generation","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_concepts_workflows_default.html","date":null,"categories":[],"body":"TOSCA containers use the default normative types to automatically generate a default workflow. This eases the definition of TOSCA topologies as, in most situations, entities extend from tosca.nodes.Root and tosca.relationships.Root . This section details how the default workflow is generated. "},{"title":"TOSCA grammar","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_grammar/tosca_grammar.html","date":null,"categories":[],"body":"This section describes the TOSCA grammar as supported in the latest alien4cloud version. Alien4cloud supports multiple versions of tosca_definitions_version : alien_dsl_1_3_0: DSL used in alien4cloud 1.4.0 alien_dsl_1_2_0: DSL used in alien4cloud 1.2.0 (deprecated) alien_dsl_1_1_0: DSL used in alien4cloud 1.1.0 (deprecated). While alien4cloud is still able to parse this definition version it is not officially supported and not recommended. tosca_simple_yaml_1_0: Official TOSCA definition. 
The support is being improved but some elements of the specification may not be supported yet, and therefore we don’t recommend using it with alien4cloud. http://docs.oasis-open.org/tosca/ns/simple/yaml/1.0: Same as tosca_simple_yaml_1_0 The recommended version is alien_dsl_1_3_0, which provides all the functionalities that alien4cloud currently supports. Child pages detail the specification as currently supported in alien4cloud. Migration from 1.2.0 version This section highlights the changes between alien_dsl_1_2_0 and alien_dsl_1_3_0 and is intended to help users easily migrate to the latest version. Type Validations Previous versions of the alien4cloud parser did not check the types used in requirement definitions and capability definitions. This has been fixed in 1.4.0; this should not change your types but will impact the order in which you can upload your archives in alien4cloud. Requirement definition Many things have changed in the requirement definition DSL. The following grammar definitions highlight the differences between alien_dsl_1_2_0 and alien_dsl_1_3_0: alien_dsl_1_2_0 : - <requirement_name> : <type_of_capability or type_of_node> # required node_filter : <node_filter> description : <description> occurrences : [ min , max ] type : <type_of_relationship> relationship : <type_of_relationship> # both type and relationship keynames could be used but only one at a time. capability : <name_of_target_capability> # name in the target node / should have node defined. alien_dsl_1_3_0 : - <requirement_name> : capability : <type_of_capability> # required node : <type_of_node> node_filter : <node_filter> description : <description> occurrences : [ min , max ] relationship : <type_of_relationship> capability_name : <name_of_target_capability> # name in the target node / should have node defined. 
The example below details this: alien_dsl_1_2_0 : node_types : alien4cloud.examples.MyNode : derived_from : tosca.nodes.SoftwareComponent requirements : - host : tosca.nodes.Compute relationship_type : tosca.relationships.HostedOn capability : host occurrences : [ 1 , 1 ] alien_dsl_1_3_0 : node_types : alien4cloud.examples.MyNode : derived_from : tosca.nodes.SoftwareComponent requirements : - host : capability : tosca.capabilities.Root node : tosca.nodes.Compute relationship : tosca.relationships.HostedOn capability_name : host occurrences : [ 1 , 1 ] Deployment artifacts Artifact definition has changed: the YAML node no longer contains both the name of the artifact and the keynames as defined in previous template versions. The previous notation was indeed more confusing and less in line with the YAML definition. 1.4.0 also supports repositories for artifacts. This section focuses on template migration and does not introduce new functionality. alien_dsl_1_2_0 : node_types : alien4cloud.examples.MyNode : derived_from : tosca.nodes.SoftwareComponent artifacts : - simple_config : config/config.yml - config : config/config.yml type : tosca.artifacts.File relationship_types : alien4cloud.examples.MyRelationship : artifacts : - simple_config : config/config.yml - config : config/config.yml type : tosca.artifacts.File topology_template : input_artifacts : config : type : tosca.artifacts.File node_templates : my_node : type : alien4cloud.examples.MyNode artifacts : config : implementation : config/config.yml type : tosca.artifacts.File alien_dsl_1_3_0 : node_types : alien4cloud.examples.MyNode : derived_from : tosca.nodes.SoftwareComponent artifacts : # Simple definition remains the same - simple_config : config/config.yml # Complex definition is now more in line with yaml and introduces the file keyname and a sub-level - config : file : config/config.yml type : tosca.artifacts.File relationship_types : alien4cloud.examples.MyRelationship : artifacts : # Simple definition remains the same - simple_config : config/config.yml # Complex definition is now more in line with yaml and introduces the file keyname and a sub-level - config : file : config/config.yml type : tosca.artifacts.File topology_template : input_artifacts : config : type : tosca.artifacts.File node_templates : my_node : type : alien4cloud.examples.MyNode artifacts : config : # implementation keyname is now replaced with file to be in line with the deployment artifact rather than operation notation. file : config/config.yml type : tosca.artifacts.File alien_dsl_1_3_0 also supports simple definition of artifacts on templates and topology inputs, which was not allowed in previous versions: topology_template : input_artifacts : config : config/config.yml node_templates : my_node : type : alien4cloud.examples.MyNode artifacts : config : config/config.yml "},{"title":"Normative Lifecycle","baseurl":"","url":"/documentation/1.4.0/devops_guide/tosca_normative_lifecycle.html","date":null,"categories":[],"body":"The TOSCA normative lifecycle is automatically generated by the TOSCA container based on the normative node and relationship types. Note that the TOSCA specification on lifecycle is still being written so this may be subject to changes before the v1 release. The lifecycle is based on the normative node interface (tosca.interfaces.node.lifecycle.Standard) and relationship interface (tosca.interfaces.relationship.Configure). Node Lifecycle generation: wait for all nodes that are targets of a DependsOn relationship to reach the started state (the current node being the source of the relationship). 
call the node’s create operation call the relationships pre_configure_source (if the node is the relationship source) or pre_configure_target (if the node is the relationship target) call the node’s configure operation call the relationships post_configure_source (if the node is the relationship source) or post_configure_target (if the node is the relationship target) call the node’s start operation call the relationships add_target (on the source nodes) and add_source (on the target nodes) operations. Environment Variables When operation scripts are called, some environment variables are filled in by the script caller. Node operation For a node operation script, the following variables are available: NODE : the node name. INSTANCE : the unique instance ID. INSTANCES : A comma-separated list of all available instance IDs. HOST : the node name of the node that hosts the current one. Additional environment variables In addition, the following variables are also available: - Input parameters : All input parameters defined on the operation definition - Properties : All the properties of the node and its capabilities are available following the naming: * SELF_<PROPERTY_NAME> for node properties * SELF_CAPABILITIES_<CAPABILITY_NAME>_<PROPERTY_NAME> for capabilities properties Relationship operation For a relationship operation script, the following variables are available: TARGET_NODE : The node name that is targeted by the relationship. TARGET_INSTANCE : The instance ID that is targeted by the relationship. TARGET_INSTANCES : Comma-separated list of all available instance IDs for the target node. SOURCE_NODE : The node name that is the source of the relationship. SOURCE_INSTANCE : The instance ID of the source of the relationship. SOURCE_INSTANCES : Comma-separated list of all available source instance IDs. 
Additional environment variables In addition, the following variables are also available: - Input parameters : All input parameters defined on the operation definition - Properties : All the properties of the involved nodes and capabilities are available following the naming: * SELF_<PROPERTY_NAME> for relationship properties * SOURCE_<PROPERTY_NAME> for source node properties * TARGET_<PROPERTY_NAME> for target node properties * TARGET_CAPABILITIES_<CAPABILITY_NAME>_<PROPERTY_NAME> for targeted capability properties Attribute and multiple instances When an operation defines an input, the value is available by fetching an environment variable. If you have multiple instances, you can fetch the input value for all instances by prefixing the input name with the instance ID. Let’s imagine you have a relationship’s configure interface operation defined like this: add_target : inputs : TARGET_IP : { get_attribute : [ TARGET , ip_address ] } implementation : scripts/add_target.sh Let’s imagine we have a node named MyNodeS with 2 instances: MyNodeS_1 , MyNodeS_2 . The node MyNodeS is connected to the target node MyNodeT which also has 2 instances MyNodeT_1 and MyNodeT_2 . When the add_target.sh script is executed for the relationship instance that connects MyNodeS_1 to MyNodeT_1 , the following variables will be available: TARGET_NODE = MyNodeT TARGET_INSTANCE = MyNodeT_1 TARGET_INSTANCES = MyNodeT_1,MyNodeT_2 SOURCE_NODE = MyNodeS SOURCE_INSTANCE = MyNodeS_1 SOURCE_INSTANCES = MyNodeS_1,MyNodeS_2 TARGET_IP = 192.168.0.11 MyNodeT_1_TARGET_IP = 192.168.0.11 MyNodeT_2_TARGET_IP = 192.168.0.12 More info In our samples you can find a topology demo-lifecycle that clearly demonstrates all this behavior. Once deployed, you can find out all the runtime environment variables available to the different lifecycle scripts. 
"},{"title":"Create your own components","baseurl":"","url":"/documentation/1.4.0/devops_guide/design_tutorial/tutorials.html","date":null,"categories":[],"body":" This documentation section is not complete. We recommend starting with the LAMP stack tutorials, which have been upgraded more recently. Components Design of a component (Tomcat server) Implementation of a component Topologies Designing a topology in ALIEN Application Creation of an application and running it on a cloud using the Cloudify PaaS "},{"title":"Component design","baseurl":"","url":"/documentation/1.4.0/devops_guide/design_tutorial/tutorials_component_design.html","date":null,"categories":[],"body":"Target: Middleware experts, architects, operations teams. Goal: Explain how to start with component design. In this tutorial, the component we will focus on is the Tomcat Application Server. Define the node type A component in ALIEN is a TOSCA node type. Information on TOSCA and the grammar can be found on the OASIS TC pages and in the ALIEN documentation in the components section. This tutorial doesn’t focus on the grammar but on the methodology to define components. The first step to define the component is to define its id. In our case, we will define a ‘fastconnect.nodes.Tomcat’ node. This component will be abstract as we don’t plan to include an implementation for now (another member of the team may provide an implementation). Moreover, an implementation may not be compatible with every operating system (Linux shell scripts won’t run on Windows) or PaaS (Cloudify-specific scripts), etc. The abstract type allows you to define an agnostic view of the middleware. The same node may have different implementations; for example a Tomcat node may have an implementation based on Puppet and another based on Chef, or even pure shell scripts. Defining abstract types is also a good way to provide separation of concerns: it lets an Architect define a middleware and leave the implementation to the experts. 
The second step when defining a node is to find which parent type it should extend from; it can be an existing type already uploaded in ALIEN or one of the TOSCA normative types . There are multiple reasons to extend from the normative types (or another type that itself extends from a normative type): Automatic workflow generation is based on the fact that the node uses the default lifecycle interfaces that are defined on the normative types. Using normative types is also a good way to leverage ALIEN 4 Cloud facet search (for example I will be able to filter on all ApplicationServer nodes). Finally, extending from normative types allows you to bootstrap your node with some properties, capabilities and requirements. For example, as our Tomcat extends from tosca.nodes.SoftwareComponent it will have: a version property that should be specified; a host requirement (as a software component must be installed on a compute node); and the default feature requirement and relationship that are used to establish depends-on relationships (which impact the lifecycle generation). In the case of a Tomcat server the normative type that we should extend from is tosca.nodes.ApplicationServer . This node itself extends from tosca.nodes.SoftwareComponent . fastconnect.nodes.Tomcat : abstract : true derived_from : tosca.nodes.ApplicationServer documentation : Tomcat application server is an application server that supports deployment of java web applications (war). It would be possible here to create another parent abstract type that supports any Java Application Server. This would allow any Java Application Server to just extend from that node and leverage common properties, requirements and capabilities (Java requirement, War capability, Java arguments properties, etc.). Extension is not mandatory as it merely simplifies the definition of multiple types and does not impact topology creation. 
In order to keep this tutorial simple we will just extend our Tomcat from the tosca.nodes.ApplicationServer node type. Properties The first property we want to define is the version of Tomcat that this Tomcat definition supports. Indeed, not all Tomcat versions have the same capabilities; for example Tomcat 7.x supports web-sockets while Tomcat 5.x does not. The version property, as stated earlier, is already defined in SoftwareComponent; it is however possible to override it to add an additional constraint. In this example we want to describe a Tomcat node for all versions 7.x so we will redefine the version property (with the same version type) and add constraints . The second property that we want to add in this tutorial is the Java options used to start up the Tomcat server. This will allow users to specify the Java memory requirements and garbage collection settings. Name Type Required Default Constraints version version true 7 Between 7 (inclusive) and 8 (exclusive) java_ops string false None None fastconnect.nodes.Tomcat : abstract : true derived_from : tosca.nodes.ApplicationServer documentation : Tomcat application server is an application server that supports deployment of java web applications (war). properties : version : type : version constraints : - greater_or_equal : 7 - less_than : 8 java_ops : type : string Tomcat node with the version between [7 and 8) Of course we could add more properties to the Tomcat node in order to allow configuration of other server-related settings. In this tutorial we will just use the properties mentioned above. Note that as ALIEN supports the versioning of archives it is easy to add properties later in a subsequent version of the component. Requirements The next important section to describe on the Tomcat type is the list of requirements. 
As Tomcat inherits from SoftwareComponent it has an inherited requirement on a Compute node (this requirement can be fulfilled in a topology by using a hosted_on relationship). The other requirement for a Tomcat node is to have Java installed. We will model this by adding a java requirement to the Tomcat node. A requirement can express constraints on some of the properties of the target capability or node. Here we reference a requirement on a Java node and specify a constraint on the version of the Java node. Name Type occurrences Constraints Notes host tosca.nodes.Compute [1, 1] (default) Inherited from tosca.nodes.SoftwareComponent java fastconnect.nodes.Java [1, 1] (default) Greater than or equal to 1.7 fastconnect.nodes.Tomcat : abstract : true derived_from : tosca.nodes.ApplicationServer documentation : Tomcat application server is an application server that supports deployment of java web applications (war). properties : version : type : version constraints : - greater_or_equal : 7 - less_than : 8 java_ops : type : string requirements : java : type : fastconnect.nodes.Java constraints : version : { greater_or_equal : 1.7 } The Tomcat node inherits the requirement on a hosting compute node that is defined by the SoftwareComponent TOSCA normative node. Here we define an abstract Tomcat node that doesn’t have any specific requirement for the compute node (os type etc.) so we don’t have to override the parent requirement. Of course it is possible to override a parent requirement to specify more advanced constraints. Capabilities Tomcat has multiple capabilities; the two main capabilities that we want to define in this tutorial are the ability to host some War node(s) on top of Tomcat as well as its HTTP endpoint. 
Name Type Occurrences http tosca.capabilities.Endpoint [0, unbounded] (default) war_host fastconnect.capabilities.War [0, unbounded] (default) In the case of the http capability we want to define the port of the tosca.capabilities.Endpoint to be the one defined below: fastconnect.nodes.Tomcat : abstract : true derived_from : tosca.nodes.ApplicationServer documentation : Tomcat application server is an application server that supports deployment of java web applications (war). properties : version : type : version constraints : - greater_or_equal : 7 - less_than : 8 java_ops : type : string requirements : java : type : fastconnect.nodes.Java constraints : version : { greater_or_equal : 1.7 } capabilities : http : type : tosca.capabilities.Endpoint properties : port : 8080 war_host : type : fastconnect.capabilities.War Conclusion Following this tutorial you should be able to define your own types to be added to the ALIEN repository. TOSCA’s requirement and capability mechanisms as well as constraint validations allow users to leverage your types so they can easily build topologies and minimize configuration errors. The next step is to actually implement the type in order to have a type that can indeed be instantiated in a topology. "},{"title":"Component implementation","baseurl":"","url":"/documentation/1.4.0/devops_guide/design_tutorial/tutorials_component_implementation.html","date":null,"categories":[],"body":"Target: Middleware experts, operations teams. Goal: Explain how to implement a type. This tutorial follows the component design tutorial; we will describe how to implement the component designed in the previous tutorial. In this tutorial we also cover how the component archive can be added and tested through ALIEN. Pre-requisite: A git repository will hold the source code for the component archive. We will also use a Jenkins CI instance in order to demonstrate how we can continuously test our archives and develop components following quality best-practices. 
Prepare the archive Elements in TOSCA and ALIEN are defined in definition files that can be packed in a Cloud Service Archive (CSAR). The first task therefore is to prepare the directory structure of our Cloud Service Archive. ├── my-definition-file.yml ├── images │ ├── component-icon.png │ └── ... ├── scripts │ └── install.sh │ └── ... Now that we have a cloud service archive with a definition file, we can edit it to define TOSCA elements. In our case we will focus on creating types. When creating a type it is important to correctly define its meta-information, and to try to reuse existing nodes, capabilities and requirements. "},{"title":"User Guide","baseurl":"","url":"/documentation/1.4.0/user_guide/user_guide.html","date":null,"categories":[],"body":"Welcome to Alien’s user guide! This section explains how to use Alien 4 Cloud. If you have not read it yet, you should probably start by reading the concepts section . As a Platform admin you should probably read the Administration Guide first in order to install and set up Alien 4 Cloud. Once your alien4cloud instance is up and running, you will be interested in the user guide administration section, which explains how you can configure users (to grant permissions), plugins, orchestrators, services, and more. As a Dev-Ops we suggest you read the how to use alien’s TOSCA catalog section and get familiar with the TOSCA reference . You can also look at our LAMP Stack Tutorial that provides a good kickoff on building components with TOSCA. "},{"title":"User(s) and Roles management","baseurl":"","url":"/documentation/1.4.0/user_guide/user_management.html","date":null,"categories":[],"body":" LDAP integration If you wish to integrate with an LDAP directory please go here . Note that you can use LDAP for users and optionally for role management. You can also manage roles in Alien even for LDAP users if you wish. 
In addition you can have users managed in LDAP and create some additional users that will be managed within Alien. SAML integration If you use the premium version and wish to use SAML please check the documentation here . Roles In order to edit users in Alien 4 Cloud you must have the ADMIN role. The default username and password when starting Alien 4 Cloud are admin / admin User(s) In order to manage users go to the page by clicking on the button in the navigation bar. Then click on the user tab of the administration side navigation bar or on the user main icon. The user page allows you to manage both users and groups. On the user tab you can search users and see the list of users matching your request. Create user In order to create a new user within Alien just click on the New User button . The create user modal appears and allows you to fill in initial data for your user. The admin is responsible for setting up the username (that will be used for login) and the password of the user. Limitations We are working on adding the ability for a user to edit his/her details but this is currently not an available feature. Changing user details can now be done only by an ADMIN user through the REST API. Of course when using LDAP integration the passwords are managed by LDAP and there is no requirement for any management in Alien. Search user Remove user Grant role(s) to a user Group(s) Create a group To create a new group within Alien just click on Add/Remove a user to/from a group Roles in Alien 4 Cloud To understand the roles concept, please refer to this section . The following describes the global roles you can grant to a user. Based on his/her roles, Alien 4 Cloud will display and allow certain operations. Role Description ADMIN Manages users, plugins, configures clouds + all other roles. APPLICATIONS_MANAGER Create new application(s). ARCHITECT Create and edit topology template(s). COMPONENTS_BROWSER [Deprecated] Not used anymore for validation. 
Can list components and see details for any of them COMPONENTS_MANAGER Manage TOSCA cloud service archives to add/remove components from the catalog. A user with no roles can log in and view the resources to which he has been granted access. For example a user with no global roles can still access and manage applications on which he has resource roles (see application and environment roles). "},{"title":"New in 1.4.0","baseurl":"","url":"/documentation/1.4.0/whatsnew.html","date":null,"categories":[],"body":" Alien 4 Cloud 1.4.0 is a very important version and we are really proud to deliver it as it brings major improvements in many aspects of alien4cloud: - Many bug fixes - Much better platform stability and scale support - Great new features that will really ease TOSCA editing, bring better reusability and more openness, and improve operations on existing deployments. We are also very excited to start working on Alien 4 Cloud 1.5.0, which will be a major version with a particular focus on both networking support improvements and post-deployment management. Location resources right management Alien 4 Cloud 1.4.0 increases the flexibility of rights management for cloud resources, allowing you to grant fine-grained authorizations on specific cloud resources to some users, groups of users, applications or even application environments. On-demand custom location resource templates Custom on-demand resources can now be defined as location resource templates, directly in the on-demand tab of the location view. (For more info, see the documentation ) Having created such a template, it is now possible to match abstract nodes to custom resources in a topology, allowing for more flexibility and reusability. Topology variants It is now possible to define in alien4cloud multiple topology variants for a single application version. 
This basically allows you to define for example a development topology that contains all elements on a single compute node (to reduce costs), and a production topology that contains the database and web application on different servers and optionally adds scaling. For more information on ALM concepts in alien4cloud and topology variants go here . In order to see how to configure versions and topology variants (also referred to as topology versions) go here . Services Services are a brand new concept in alien4cloud that allows you to separate the lifecycle and responsibilities of various elements of your application(s). From a consumer point of view services are very much like on-demand resources; the difference here is that while the lifecycle of on-demand resources is controlled by the consumer, the service lifecycle is actually controlled and managed by the service owner. Find out more on: * how an admin can define a service to reuse existing external applications here * how an application in alien4cloud can become a service to be reused by other applications * how to consume a service Improved deployment setup Topology update TOSCA support While support for the network elements is planned for 1.5.0, we already added support for the PortDef data type in the new version of the normative types we support. We also added support for private_address, which is the new TOSCA name for ip_address, and public_address, which replaces the deprecated public_ip_address. You can still use either in alien4cloud. The most important improvement in the TOSCA support is the management and injection of the Endpoint’s ip_address attribute. It is a major improvement as it finally allows defining self-sufficient capabilities and requirements to build efficient relationships. 
Documentation and sizing recommendations Previous versions of alien4cloud were less robust than the current one; in addition to responding better to incorrect platform sizing we also provide a more comprehensive sizing guide that will hopefully help you get the most out of the alien4cloud platform. Fixes in 1.4.x Alien 4 Cloud 1.4.x is the latest supported version. Here you can see all the bug fixes that improve stability since version 1.4.0. New feature Improvement Bug Breaking change Alien 4 Cloud Type Id Description ALIEN-2475 Fixed a bug in /rest/v1/deployments API that returned the first hundred deployments and not the last hundred deployments ALIEN-2489 Fixed an issue that prevented relationship operations from being injected from the service side in case of a target service ALIEN-2517 Location resources security can now be managed per environment type ALIEN-2547 Fix the broken search on the modal to grant/revoke authorization on location resources for applications, environments and environment types ALIEN-2551 Fix display of services authorization for applications, environments and environment types ALIEN-2552 Fixed conflict in Elasticsearch mapping on inputParameters between nodes ALIEN-2578 Fix bug on substitutions ALIEN-2578 Display the expected topology after a version change on the topology catalog ALIEN-2598 Fix 409 error when trying to re-deploy just after a deployment failure Cloudify 4 PaaS Provider Type Id Description ALIEN-2488 Fixed: A4C_EXECUTION_USER was overridden by the value of node property “user” if present ALIEN-2440 Include a new log mechanism for Cloudify 4 and Alien with a server component ALIEN-2553 Fix error during undeployment of invalid blueprint ALIEN-2575 Fix wrong return of the get_attribute TOSCA function Alien 4 Cloud Premium Type Id Description ALIEN-2550 Migration plugin to latest version Cloudify 4 PaaS Provider Premium Type Id Description ALIEN-2604 Fix bug on block storage volume ID "}]}