Changes from all commits (50 commits)
5ff5488
Create deployment.yaml
mehar221 May 9, 2025
2c5a958
Create service.yaml
mehar221 May 9, 2025
20e7700
Create configmap.yaml
mehar221 May 9, 2025
658f2df
Create Chart.yaml
mehar221 May 9, 2025
2c3b07d
Create values.yaml
mehar221 May 9, 2025
d365cd8
Create override-values.yaml
mehar221 May 9, 2025
2d725be
Rename override-values.yaml to override-values.json
mehar221 May 9, 2025
eb9bcf6
Update override-values.json
mehar221 May 9, 2025
603e789
Update and rename values.yaml to values.json
mehar221 May 9, 2025
42eda1b
Create config.json
mehar221 May 9, 2025
5560537
Update configmap.yaml
mehar221 May 9, 2025
e75da6a
Update configmap.yaml
mehar221 May 9, 2025
6156640
Update configmap.yaml
mehar221 May 9, 2025
6f7a1f9
Update configmap.yaml
mehar221 May 9, 2025
db9525e
Create config.json
mehar221 May 9, 2025
1b51f12
Update configmap.yaml
mehar221 May 9, 2025
dfb8fa7
Update configmap.yaml
mehar221 May 9, 2025
75dc517
Update configmap.yaml
mehar221 May 14, 2025
640efca
Update configmap.yaml
mehar221 May 19, 2025
2ae4739
Update values.json
mehar221 May 19, 2025
8afee64
Update configmap.yaml
mehar221 May 20, 2025
1775a87
Update configmap.yaml
mehar221 May 20, 2025
2f6b73c
Update configmap.yaml
mehar221 May 20, 2025
2056381
Delete my-helm-chart directory
mehar221 May 20, 2025
5611aae
Create configmap.yaml
mehar221 May 20, 2025
05ae9f6
Create deployment.yaml
mehar221 May 20, 2025
4acc02c
Create service.yaml
mehar221 May 20, 2025
252ff0b
Create values.yaml
mehar221 May 20, 2025
c96d892
Update values.yaml
mehar221 May 20, 2025
6dac302
Create Chart.yaml
mehar221 May 20, 2025
9f4379c
Update configmap.yaml
mehar221 May 20, 2025
748eff5
Create config.json
mehar221 May 20, 2025
cbd665b
Update configmap.yaml
mehar221 May 20, 2025
dd08bb7
Update configmap.yaml
mehar221 May 20, 2025
bbcbf9a
Update config.json
mehar221 May 20, 2025
41be439
Update values.yaml
mehar221 May 20, 2025
7f75759
Create values.yaml
mehar221 May 20, 2025
c4bf3e4
Delete my-helm-chart/values.yaml
mehar221 May 20, 2025
e59d4dc
Update configmap.yaml
mehar221 May 20, 2025
d9f3dbb
Update values.yaml
mehar221 May 20, 2025
9939a19
Update config.json
mehar221 May 20, 2025
b2505f4
Update config.json
mehar221 May 20, 2025
0589918
Update configmap.yaml
mehar221 May 20, 2025
09ebd49
Update config.json
mehar221 May 20, 2025
e42580f
Update configmap.yaml
mehar221 May 20, 2025
aec360a
Update config.json
mehar221 May 20, 2025
e07278e
Update service.yaml
mehar221 May 20, 2025
5e9b338
Create secrets.txt
mehar221 Jun 12, 2025
8d204b3
Merge branch 'harness:main' into main
mehar221 Oct 31, 2025
360286d
FAQs IDP
mehar221 Oct 31, 2025
160 changes: 159 additions & 1 deletion docs/faqs/internal-developer-portal.md
Original file line number Diff line number Diff line change
@@ -58,4 +58,162 @@ During onboarding into IDP we mass onboard all the services using a `catalog-inf

3. In some cases entities get into the `hasError` state. You can tell whether an entity is orphaned or in the `hasError` state by checking the **Processing Status** dropdown on the Catalog page.

4. Additionally, here is an example [script](https://github.com/harness-community/idp-samples/blob/main/catalog-scripts/identify-and-delete-orphan-entity.py) that finds and deletes all entities that have a `NotFoundError`, because the `source-location` for these entities is no longer valid (YAML files were moved or renamed).


### What is the purpose of Catalog Auto-Discovery with Harness CD Services?

Catalog Auto-Discovery automatically syncs your Harness CD services into the Harness IDP Catalog. This allows you to view and manage CD services directly in the Catalog as IDP entities. The sync is real-time and uni-directional (from Harness CD → IDP), ensuring all service details such as name, identifier, description, and tags remain up to date.

### What are the prerequisites to use Harness CD Auto-Discovery?

Before enabling Catalog Auto-Discovery, ensure the following:
- The feature flag `IDP_CATALOG_CD_AUTO_DISCOVERY` is enabled (contact Harness Support if it’s not).
- Harness CD is enabled for your account, and it’s the same account used for Harness IDP.

### Can I edit service details in the IDP Catalog after syncing?

No. The sync between Harness CD and IDP is uni-directional. The IDP entity fields—name, identifier, description, and tags—are read-only in IDP. Any changes to these fields must be made directly in Harness CD.

### How are relationships defined and maintained in IDP 2.0?

In IDP 2.0, relations are directional and automatically managed by the system. You only need to define the primary relation (for example, dependsOn), and Harness automatically generates the reverse relation (dependencyOf). Common relation types include ownedBy/ownerOf, providesApi/apiProvidedBy, consumesApi/apiConsumedBy, and partOf/hasPart. This design reduces redundancy in YAML definitions, prevents errors, and ensures relational consistency across entities.
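The reverse-relation behavior can be illustrated with a small Python sketch. The relation pairings mirror the types listed above, but the mapping and function names are hypothetical, not Harness's implementation:

```python
# Hypothetical sketch: given a user-defined primary relation, derive the
# reverse relation the way IDP 2.0 generates it automatically.
INVERSE = {
    "dependsOn": "dependencyOf",
    "ownedBy": "ownerOf",
    "providesApi": "apiProvidedBy",
    "consumesApi": "apiConsumedBy",
    "partOf": "hasPart",
}
# Make the mapping bidirectional so either direction can be looked up.
INVERSE.update({v: k for k, v in list(INVERSE.items())})

def reverse_relation(source: str, relation: str, target: str) -> tuple:
    """Return the system-generated reverse edge for a user-defined edge."""
    return (target, INVERSE[relation], source)

# Defining only dependsOn yields the dependencyOf edge automatically.
print(reverse_relation("service-a", "dependsOn", "service-b"))
# ('service-b', 'dependencyOf', 'service-a')
```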

### What problem does the Multi-Repo Catalog Population Script solve?

In large organizations with hundreds of GitHub repositories, manually onboarding services into the Harness Software Catalog can be slow and error-prone. The Multi-Repo Catalog Population Script automates this process by discovering all repositories in a GitHub organization, generating standardized idp.yaml files for each service, pushing them to a central Git repository, and registering them in Harness using the IDP 2.0 Entities API. This enables consistent, version-controlled catalog management with full GitOps visibility.
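A minimal sketch of the per-repository generation step, assuming a simple entity template. The `apiVersion`, field values, and repository names here are illustrative, not the script's exact output:

```python
# Hypothetical per-repo step of a multi-repo onboarding script:
# turn each discovered repository name into a standardized idp.yaml.
TEMPLATE = """apiVersion: harness.io/v1
kind: Component
metadata:
  name: {name}
spec:
  type: service
  owner: platform-team
"""

def generate_idp_yaml(repo_name: str) -> str:
    # Normalize the repo name into a catalog-friendly entity name.
    name = repo_name.lower().replace("_", "-")
    return TEMPLATE.format(name=name)

# In the real script the repo list comes from the GitHub org API, the files
# are pushed to a central repo, then registered via the Entities API.
for repo in ["Payments_API", "order-service"]:
    print(generate_idp_yaml(repo))
```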

### Why should I use the Bitbucket catalog population script instead of the default Bitbucket Catalog Discovery plugin?

The default Bitbucket Catalog Discovery plugin registers a single discovery location covering all repositories. In large organizations (e.g., 3000+ repositories), this can cause sync failures: if one repository’s catalog fetch fails, the entire sync fails.
The Bitbucket catalog population script solves this by registering each repository’s `catalog-info.yaml` as a separate location, ensuring independent synchronization and improved reliability.
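The script's key design choice, one location per repository, can be sketched as follows. The payload shape and URL layout are illustrative assumptions, not the script's actual API calls:

```python
# Build one catalog location per repository so that a failure in a single
# repo's catalog-info.yaml cannot fail the sync for every other repo.
def location_payload(workspace: str, repo: str) -> dict:
    target = (f"https://bitbucket.org/{workspace}/{repo}"
              f"/src/main/catalog-info.yaml")
    return {"type": "url", "target": target}

# Each payload would be registered independently with Harness IDP.
payloads = [location_payload("acme", r) for r in ["repo-a", "repo-b"]]
```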

### What are the prerequisites for running the Bitbucket catalog population script?

Before executing the script, ensure you have the following:
- A Harness API key with access to IDP entities.
- Your Bitbucket username and app password (not your email).
- A repository to store IDP configuration YAMLs (e.g., `chosen_repo`).
- Python 3 installed with the `requests` library.

Once set up, you can download the script using:
`curl -o idp-catalog-wizard-bitbucket.py https://raw.githubusercontent.com/harness-community/idp-samples/main/catalog-scripts/idp-catalog-wizard-bitbucket.py`


### What are the different execution options available for the script?

The script supports multiple modes based on your workflow:
- `--create-yamls`: generates YAML files for all repositories (manual push required).
- `--register-yamls`: registers already-created YAMLs in Harness.
- `--run-all`: performs all steps (create, push, and register YAMLs) in one go.

You can optionally limit the scope with `--project_key` to work within a specific Bitbucket project instead of the entire workspace.

### What is the purpose of the Kubernetes catalog population script?

The Kubernetes catalog population script automates the onboarding of your Kubernetes resources (Deployments, Services, etc.) into the Harness Internal Developer Portal (IDP).
It scans your Kubernetes cluster, generates IDP-compatible YAML files for each resource, commits them to a central GitHub repository, and registers them with Harness IDP through the Entities API.
This automation ensures a version-controlled, real-time reflection of your cluster in the Harness catalog without any manual intervention.

### What are the prerequisites for running this script successfully?

Before executing the script, ensure the following setup:
- Python 3 with the libraries `requests`, `python-dotenv`, and `kubernetes`.
- Access to a Kubernetes cluster with `kubectl` properly configured (`kubectl config current-context`).
- A valid `.env` file containing: `HARNESS_API_KEY`, `HARNESS_ACCOUNT_ID`, `ORG_IDENTIFIER`, `PROJECT_IDENTIFIER`, `CONNECTOR_REF`, `CENTRAL_REPO`, `GITHUB_TOKEN`, and `GITHUB_ORG`.
- Permissions to list and get Deployments and Services in the target namespaces.

⚠️ Ensure that the Harness API key has write access to IDP entities and that the GitHub token has the `repo` and `read:org` scopes.
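A small pre-flight check along these lines can fail fast when any of the variables above is missing. This is a sketch using only the standard library; the variable names come from the list above:

```python
import os

# Required .env settings listed in the prerequisites above.
REQUIRED = [
    "HARNESS_API_KEY", "HARNESS_ACCOUNT_ID", "ORG_IDENTIFIER",
    "PROJECT_IDENTIFIER", "CONNECTOR_REF", "CENTRAL_REPO",
    "GITHUB_TOKEN", "GITHUB_ORG",
]

def missing_vars(env=os.environ) -> list:
    """Return the required variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

# Example: with only one variable set, the other seven are reported.
print(missing_vars({"HARNESS_API_KEY": "x"}))
```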


### How does the script detect dependencies between Kubernetes resources?

The script uses two mechanisms for dependency detection:

1. **Service-to-Deployment mapping:** it compares Service selectors with Deployment labels to identify which Deployments back each Service.

        if service_selector and all(deployment_labels.get(k) == v for k, v in service_selector.items()):
            implementing_deployments.append(resource["name"])

2. **Environment variable analysis:** it scans environment variables inside Deployments to detect references to other Services, revealing implicit dependencies.

        if service_name.lower() in env_value.lower():
            dependencies.append({"name": service_name, "type": "Service"})

These relationships are automatically included in the generated YAMLs under the `dependsOn` section in the Harness catalog.
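The two mechanisms above can be combined into a small runnable sketch, using hard-coded example resources in place of live cluster data (function names here are illustrative, not the script's own):

```python
# Runnable sketch of the two detection mechanisms described above.

def deployments_for_service(service_selector, deployments):
    """Service-to-Deployment mapping: a Deployment backs a Service when
    its labels satisfy every key/value pair in the Service's selector."""
    return [
        d["name"] for d in deployments
        if service_selector and all(
            d["labels"].get(k) == v for k, v in service_selector.items()
        )
    ]

def env_dependencies(env_vars, service_names):
    """Environment-variable analysis: an env value that mentions a Service
    name suggests an implicit dependency on that Service."""
    deps = []
    for service_name in service_names:
        if any(service_name.lower() in v.lower() for v in env_vars.values()):
            deps.append({"name": service_name, "type": "Service"})
    return deps

deployments = [
    {"name": "web", "labels": {"app": "web"}},
    {"name": "api", "labels": {"app": "api"}},
]
print(deployments_for_service({"app": "api"}, deployments))  # ['api']
print(env_dependencies({"API_URL": "http://api:8080"}, ["api", "db"]))
# [{'name': 'api', 'type': 'Service'}]
```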


### How do Environment Blueprints interact with IaCM and CD modules in Harness IDP?

Environment Blueprints act as a bridge between IaCM (Infrastructure as Code Management) and CD (Continuous Deployment) within Harness IDP. When a blueprint is instantiated:
- **IaCM Workspaces** provision the required infrastructure based on workspace templates, variables, and cloud connections.
- **CD pipelines** deploy the associated services into the provisioned infrastructure.
- **The Platform Orchestrator** manages dependencies and sequencing to ensure reliable end-to-end provisioning and deployment.
This integration ensures environments are created consistently, remain up-to-date through lifecycle operations, and adhere to organizational standards, while giving developers a self-service interface to manage environments without handling low-level infrastructure or deployment details.

### What is the role of IaCM Workspaces and Workspace Templates in Environment Management?

- **Workspaces:** containers for infrastructure resources, including IaC code, variables, cloud connections, and state files. They track the status of managed resources for reliable provisioning.
- **Workspace Templates:** standardize workspace configurations by predefining variables, settings, and options across projects. This ensures consistent infrastructure deployment and simplifies onboarding.

### How do Harness CD and the Platform Orchestrator integrate with Environment Management?

- **CD Pipelines:** deploy services into provisioned infrastructure as defined in environment blueprints.
- **Platform Orchestrator:** manages provisioning sequences, dependencies, and cleanup of resources.

Together, they ensure environments are deployed reliably, adhere to governance policies, and are easy for developers to use, while abstracting infrastructure and deployment complexity.

### How do IaCM Workspaces and Pipelines support Environment Management?

- **Workspace Templates** standardize infrastructure provisioning by predefining variables, cloud connectors, and repository details.
- **Provision Pipelines** deploy infrastructure (e.g., a Kubernetes namespace).
- **Destroy Pipelines** clean up resources after use.

These pipelines ensure environments are reproducible and manageable.

### How are services deployed and managed in Environment Management?

- Services are defined in the IDP Catalog and backed by CD services.
- CD pipelines (Deploy/Uninstall) handle service deployment into the provisioned infrastructure.
- The “Golden Pipeline” model allows multiple services to use a single pipeline with runtime input, while individual pipelines can also be used.

### What is the difference between Backend Proxy and Delegate Proxy in terms of authentication handling?

- **Backend Proxy:** stores and injects authentication tokens for third-party APIs in the backend. Ideal for APIs that require authentication headers or secret management. Handles CORS, HTTP termination, request logging, retries, and failover.
- **Delegate Proxy:** intercepts requests in the backend to access internal systems that are not publicly accessible from Harness SaaS. The Delegate runs inside your VPC or local network and uses your infrastructure secrets if they aren’t stored in Harness.


### When should a plugin bypass both Backend Proxy and Delegate Proxy?

A plugin can bypass both proxies when:
- The API is publicly accessible and does not require authentication.
- You are okay with the client (browser) making direct requests to the API.

**Caution:** direct calls cannot leverage Delegate Proxy interception, so internal/private systems cannot be accessed this way.

### Can a plugin use both Backend Proxy and Delegate Proxy simultaneously?

Yes. This is required when:
- The plugin calls an internal system that is not accessible from Harness SaaS (Delegate Proxy required).
- The API requires authentication or headers managed by the Backend Proxy.

In this setup, the request flow is: Plugin → Backend Proxy → Delegate Interceptor → Internal Service.

### Why are Backend Proxy and Delegate Proxy not needed for endpoints under harness.io?

Endpoints under the harness.io domain are already accessible from within Harness’s infrastructure. Using a Delegate Proxy would be redundant since direct access is inherently available.

### Can custom backends be added as part of IDP custom plugins?

No. IDP custom plugins can use Frontend, Backend Proxy, or Delegate Proxy, but custom backend implementations are not supported. Any backend logic must be handled outside the plugin.

### Can secrets stored in external secret managers be used with Delegate Proxy?

Yes. If secrets are not stored in Harness secret manager, the Delegate Proxy can access secrets from external systems like HashiCorp Vault, cloud provider secret managers, or other internal secret stores.

### Can a workflow belong to multiple groups?

No. A workflow belongs to one group at a time. To show the same workflow in multiple contexts, it would need to be duplicated or referenced separately.

### Can the homepage groups and workflows be exported or version-controlled?

Currently, homepage configurations are managed through IDP Admin Layouts, not as separate version-controlled files.
YAML definitions of workflows themselves can be version-controlled, allowing updates to be reflected in the homepage after a Refresh.

### How does Open Playground help during customization?

Open Playground allows you to preview workflow changes live before publishing.
It is useful for testing icon changes, button intents, and workflow order without impacting the main homepage.
7 changes: 7 additions & 0 deletions my-helm-chart/overrides/config.json
@@ -0,0 +1,7 @@
myconfig: |-
  {
    "authConfig": {
      "tenant": "OIDP",
      "loginApiUrl": "ksdn"
    }
  }
6 changes: 6 additions & 0 deletions my-helm-chart/templates/Chart.yaml
@@ -0,0 +1,6 @@
apiVersion: v2
name: rac-app
description: A Helm chart for deploying the RAC application
type: application
version: 0.1.0
appVersion: "1.0.0"
7 changes: 7 additions & 0 deletions my-helm-chart/templates/configmap.yaml
@@ -0,0 +1,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: rac-config-files
data:
  config.json: |-
{{ .Values.myconfig | indent 4 }}
27 changes: 27 additions & 0 deletions my-helm-chart/templates/deployment.yaml
@@ -0,0 +1,27 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-app
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-app
    spec:
      containers:
        - name: app-container
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
          volumeMounts:
            - name: config-volume
              mountPath: /app/config
              readOnly: true
      volumes:
        - name: config-volume
          configMap:
            name: rac-config-files
11 changes: 11 additions & 0 deletions my-helm-chart/templates/service.yaml
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Service
metadata:
  name: svc-new
spec:
  type: ClusterIP
  selector:
    # Must match the Deployment's pod labels so the Service routes traffic.
    app: {{ .Release.Name }}-app
  ports:
    - port: 80
      targetPort: 80
10 changes: 10 additions & 0 deletions my-helm-chart/templates/values.yaml
@@ -0,0 +1,10 @@
replicaCount: 1

image:
  repository: library/nginx
  tag: latest

service:
  type: ClusterIP
  port: 80

1 change: 1 addition & 0 deletions secrettesting/secrets.txt
@@ -0,0 +1 @@
Name: "Manisha"