20 changes: 10 additions & 10 deletions backend/app/database/falkor/code-graph-backend/api/graph.py
@@ -46,20 +46,20 @@ def __init__(self, name: str) -> None:

         # Initialize the backlog as disabled by default
         self.backlog = None
 
         # create indicies
+        self._safe_create_index(self.g.create_node_range_index, "File", "name", "ext")
+        self._safe_create_index(self.g.create_node_fulltext_index, "Searchable", "name")
-        # index File path, name and ext fields
+    def _safe_create_index(self, func, label, *args):
         try:
-            self.g.create_node_range_index("File", "name", "ext")
-        except Exception:
-            pass
+            func(label, *args)
+            logging.debug(f"Successfully created/verified index for '{label}'.")
+        except Exception as e:
+            if "already exists" in str(e).lower():
+                logging.info(f"Index for '{label}' already exists.")
+            else:
+                logging.error(f"Failed to create index for '{label}': {e}", exc_info=True)
Comment on lines +53 to +61

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's find the full graph.py file to understand imports and context
find . -type f -name "graph.py" | grep -i falkor

Repository: AOSSIE-Org/Devr.AI



🏁 Script executed:

# Look for requirements files to identify FalkorDB version
find . -type f \( -name "requirements*.txt" -o -name "setup.py" -o -name "pyproject.toml" -o -name "Pipfile" \) | head -20



🏁 Script executed:

# Search for FalkorDB imports in the codebase
rg "falkor|FalkorDB" -i --type py | head -30



🏁 Script executed:

# Check FalkorDB version in requirements
cat backend/app/database/falkor/code-graph-backend/requirements.txt



🏁 Script executed:

# Also check the main requirements
cat backend/requirements.txt | grep -i falkor



🏁 Script executed:

# Read the full graph.py file to understand the context
wc -l backend/app/database/falkor/code-graph-backend/api/graph.py



🏁 Script executed:

# Read the graph.py file
head -100 backend/app/database/falkor/code-graph-backend/api/graph.py



🏁 Script executed:

# Read lines around the _safe_create_index method (lines 53-61 plus surrounding context)
sed -n '1,100p' backend/app/database/falkor/code-graph-backend/api/graph.py



🏁 Script executed:

# Search for other exception handling patterns in the same file
rg "except" backend/app/database/falkor/code-graph-backend/api/graph.py -A 3



🏁 Script executed:

# Look for other FalkorDB exception handling in the codebase
rg "except.*falkor" -i backend/app/database/falkor/code-graph-backend/ -A 2



🌐 Web query:

FalkorDB 1.0.10 Python client exception types error handling

💡 Result:

In FalkorDB Python client v1.0.10, almost all runtime errors you’ll handle are redis-py exceptions, because the client is a thin wrapper around a redis.Redis(...) connection and ultimately calls execute_command(...) for FalkorDB commands. [1]

Exception types you should expect

1) FalkorDB-py custom exception

  • falkordb.exceptions.SchemaVersionMismatchException (the only documented custom exception type in the package). [2]

2) redis-py exceptions (primary ones to catch)
Catch from redis.exceptions (sync) / redis.asyncio (async), typically:

  • ResponseError: FalkorDB query/command errors (syntax errors, runtime errors like _AR_EXP_UpdateEntityIdx, etc.)
  • ConnectionError: can’t connect / connection dropped
  • TimeoutError: socket timeouts / blocked too long
  • AuthenticationError: bad credentials
  • (optional umbrella) RedisError: base class for most redis-py client exceptions

Practical error-handling pattern

from redis.exceptions import ResponseError, ConnectionError, TimeoutError, AuthenticationError, RedisError
from falkordb.exceptions import SchemaVersionMismatchException

try:
    res = g.query("MATCH (n) RETURN n LIMIT 1")
except ResponseError as e:
    # FalkorDB reported an error executing the command/query
    msg = str(e)
    # inspect msg for FalkorDB error codes/messages if you need branching
    raise
except (ConnectionError, TimeoutError, AuthenticationError) as e:
    # transport / auth problems
    raise
except SchemaVersionMismatchException as e:
    # schema cache/version mismatch (raised by the client’s schema machinery)
    raise
except RedisError as e:
    # any other redis-py error
    raise

Notes:

  • If you’re specifically seeing FalkorDB engine errors while running queries (common in FalkorDB==1.0.10 reports), they will surface as redis.exceptions.ResponseError because the server returns an error reply. [1]
  • v1.0.10 itself is a “version bump” release (plus connection-related tweaks like TLS support via from_url), not an exceptions redesign—so the above model is what you should rely on. [3]

Sources:
[1] FalkorDB client builds on redis.Redis and exposes execute_command (readthedocs module source). (falkordb-py.readthedocs.io)
[2] SchemaVersionMismatchException definition (readthedocs exceptions module). (falkordb-py.readthedocs.io)
[3] v1.0.10 release notes / what changed. (github.com)


Catch redis.exceptions.ResponseError instead of broad Exception.

FalkorDB v1.0.10 is a thin wrapper around redis-py, so "index already exists" errors surface as redis.exceptions.ResponseError, not a FalkorDB-specific exception type. Catching Exception masks unrelated errors (connection failures, timeouts, authentication errors) that deserve different handling.

While string-based matching for "already exists" is not ideal, it remains the only viable approach since there's no specific error code exposed. However, catching ResponseError provides better specificity:

from redis.exceptions import ResponseError

def _safe_create_index(self, func, label, *args):
    try:
        func(label, *args)
        logging.debug(f"Successfully created/verified index for '{label}'.")
    except ResponseError as e:
        if "already exists" in str(e).lower():
            logging.info(f"Index for '{label}' already exists.")
        else:
            logging.error(f"Failed to create index for '{label}': {e}", exc_info=True)
🤖 Prompt for AI Agents
In `@backend/app/database/falkor/code-graph-backend/api/graph.py` around lines 53-61, replace the broad Exception handler in _safe_create_index with redis.exceptions.ResponseError: import ResponseError from redis.exceptions, change the `except Exception as e` block to `except ResponseError as e`, and keep the existing "already exists" string check and logging behavior (logging.info for an existing index, logging.error with exc_info for other ResponseError cases) so unrelated exceptions (connection failures, timeouts, auth errors) are no longer swallowed by this method.
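The behavior the review asks for can be sketched in isolation with stub exception classes standing in for the redis-py types (class and function names below are illustrative stand-ins, not the project's code): an "already exists" ResponseError is absorbed, while a transport-style error propagates instead of being silently swallowed.

```python
import logging

# Stand-ins for redis.exceptions.ResponseError / ConnectionError so the
# sketch runs without a Redis server; names are illustrative only.
class ResponseError(Exception):
    """Server-reported error (how 'index already exists' surfaces)."""

class TransportError(Exception):
    """Stand-in for a connection/timeout failure that should NOT be swallowed."""

def safe_create_index(func, label, *args):
    """Mirror of the suggested _safe_create_index with a narrow except clause."""
    try:
        func(label, *args)
        logging.debug("Successfully created/verified index for '%s'.", label)
    except ResponseError as e:
        if "already exists" in str(e).lower():
            logging.info("Index for '%s' already exists.", label)
        else:
            logging.error("Failed to create index for '%s': %s", label, e)

def duplicate_index(label, *args):
    raise ResponseError("Index already exists")

def broken_transport(label, *args):
    raise TransportError("connection refused")

safe_create_index(duplicate_index, "File", "name", "ext")  # absorbed quietly
try:
    safe_create_index(broken_transport, "File", "name")
except TransportError:
    print("transport error propagated")  # no longer masked by a broad except
```

With the original `except Exception` both calls would pass silently; narrowing to ResponseError keeps the idempotent-index behavior while letting genuine failures surface.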


-        # index Function using full-text search
-        try:
-            self.g.create_node_fulltext_index("Searchable", "name")
-        except Exception:
-            pass
 
     def clone(self, clone: str) -> "Graph":
         """
7 changes: 6 additions & 1 deletion backend/config.py
@@ -1,4 +1,5 @@
 from dotenv import load_dotenv, find_dotenv
+import logging
 import os


@@ -12,5 +13,9 @@
 GITHUB_TOKEN = os.getenv("GITHUB_TOKEN") or os.getenv("GH_TOKEN")
 
 MODEL_NAME = os.getenv("EMBEDDING_MODEL", "BAAI/bge-small-en-v1.5")
-MAX_BATCH_SIZE = int(os.getenv("EMBEDDING_MAX_BATCH_SIZE", "32"))
+try:
+    MAX_BATCH_SIZE = int(os.getenv("EMBEDDING_MAX_BATCH_SIZE", "32"))
+except ValueError:
+    logging.warning("Invalid integer for EMBEDDING_MAX_BATCH_SIZE. Defaulting to 32.")
+    MAX_BATCH_SIZE = 32
 EMBEDDING_DEVICE = os.getenv("EMBEDDING_DEVICE", "cpu")
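The guard added around EMBEDDING_MAX_BATCH_SIZE generalizes to any integer-valued environment variable; a minimal sketch of the same pattern (the `int_env` helper name is illustrative, not part of the codebase):

```python
import logging
import os

def int_env(name: str, default: int) -> int:
    """Parse an integer environment variable, falling back to a default on
    bad input (mirrors the try/except added around EMBEDDING_MAX_BATCH_SIZE)."""
    try:
        return int(os.getenv(name, str(default)))
    except ValueError:
        logging.warning("Invalid integer for %s. Defaulting to %d.", name, default)
        return default

os.environ["EMBEDDING_MAX_BATCH_SIZE"] = "thirty-two"  # malformed value
print(int_env("EMBEDDING_MAX_BATCH_SIZE", 32))         # falls back: 32
os.environ["EMBEDDING_MAX_BATCH_SIZE"] = "64"
print(int_env("EMBEDDING_MAX_BATCH_SIZE", 32))         # parsed: 64
```

Note that `int(os.getenv(name, str(default)))` also covers the unset case, since `os.getenv` returns the stringified default when the variable is absent.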