Dataset Viewer
Auto-converted to Parquet
instruction: string (100 distinct values)
code: string (length 78 to 193k)
response: string (length 259 to 170k)
file: string (length 59 to 203)
Generate docstrings for each module
# -*- coding: utf-8 -*-

import re
import sys
import random

from typing import List, Tuple

import requests
from requests.models import Response


def find_links_in_text(text: str) -> List[str]:
    link_pattern = re.compile(r'((?:https?://|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:\'\".,<>?«»“”‘’]))')

    raw_links = re.findall(link_pattern, text)
    links = [
        str(raw_link[0])
        for raw_link in raw_links
    ]

    return links


def find_links_in_file(filename: str) -> List[str]:
    with open(filename, mode='r', encoding='utf-8') as file:
        readme = file.read()
        index_section = readme.find('## Index')
        if index_section == -1:
            index_section = 0
        content = readme[index_section:]

    links = find_links_in_text(content)

    return links


def check_duplicate_links(links: List[str]) -> Tuple[bool, List]:
    seen = {}
    duplicates = []
    has_duplicate = False

    for link in links:
        link = link.rstrip('/')
        if link not in seen:
            seen[link] = 1
        else:
            if seen[link] == 1:
                duplicates.append(link)

    if duplicates:
        has_duplicate = True

    return (has_duplicate, duplicates)


def fake_user_agent() -> str:
    user_agents = [
        'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1467.0 Safari/537.36',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/605.1.15 (KHTML, like Gecko)',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36',
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36',
    ]

    return random.choice(user_agents)


def get_host_from_link(link: str) -> str:
    host = link.split('://', 1)[1] if '://' in link else link

    # Remove routes, arguments and anchors
    if '/' in host:
        host = host.split('/', 1)[0]
    elif '?' in host:
        host = host.split('?', 1)[0]
    elif '#' in host:
        host = host.split('#', 1)[0]

    return host


def has_cloudflare_protection(resp: Response) -> bool:
    code = resp.status_code
    server = resp.headers.get('Server') or resp.headers.get('server')
    cloudflare_flags = [
        '403 Forbidden',
        'cloudflare',
        'Cloudflare',
        'Security check',
        'Please Wait... | Cloudflare',
        'We are checking your browser...',
        'Please stand by, while we are checking your browser...',
        'Checking your browser before accessing',
        'This process is automatic.',
        'Your browser will redirect to your requested content shortly.',
        'Please allow up to 5 seconds',
        'DDoS protection by',
        'Ray ID:',
        'Cloudflare Ray ID:',
        '_cf_chl',
        '_cf_chl_opt',
        '__cf_chl_rt_tk',
        'cf-spinner-please-wait',
        'cf-spinner-redirecting'
    ]

    if code in [403, 503] and server == 'cloudflare':
        html = resp.text
        flags_found = [flag in html for flag in cloudflare_flags]
        any_flag_found = any(flags_found)
        if any_flag_found:
            return True

    return False


def check_if_link_is_working(link: str) -> Tuple[bool, str]:
    has_error = False
    error_message = ''

    try:
        resp = requests.get(link, timeout=25, headers={
            'User-Agent': fake_user_agent(),
            'host': get_host_from_link(link)
        })

        code = resp.status_code
        if code >= 400 and not has_cloudflare_protection(resp):
            has_error = True
            error_message = f'ERR:CLT: {code} : {link}'

    except requests.exceptions.SSLError as error:
        has_error = True
        error_message = f'ERR:SSL: {error} : {link}'
    except requests.exceptions.ConnectionError as error:
        has_error = True
        error_message = f'ERR:CNT: {error} : {link}'
    except (TimeoutError, requests.exceptions.ConnectTimeout):
        has_error = True
        error_message = f'ERR:TMO: {link}'
    except requests.exceptions.TooManyRedirects as error:
        has_error = True
        error_message = f'ERR:TMR: {error} : {link}'
    except (Exception, requests.exceptions.RequestException) as error:
        has_error = True
        error_message = f'ERR:UKN: {error} : {link}'

    return (has_error, error_message)


def check_if_list_of_links_are_working(list_of_links: List[str]) -> List[str]:
    error_messages = []
    for link in list_of_links:
        has_error, error_message = check_if_link_is_working(link)
        if has_error:
            error_messages.append(error_message)

    return error_messages


def start_duplicate_links_checker(links: List[str]) -> None:
    print('Checking for duplicate links...')

    has_duplicate_link, duplicates_links = check_duplicate_links(links)

    if has_duplicate_link:
        print(f'Found duplicate links:')
        for duplicate_link in duplicates_links:
            print(duplicate_link)
        sys.exit(1)
    else:
        print('No duplicate links.')


def start_links_working_checker(links: List[str]) -> None:
    print(f'Checking if {len(links)} links are working...')

    errors = check_if_list_of_links_are_working(links)

    if errors:
        num_errors = len(errors)
        print(f'Apparently {num_errors} links are not working properly. See in:')
        for error_message in errors:
            print(error_message)
        sys.exit(1)


def main(filename: str, only_duplicate_links_checker: bool) -> None:
    links = find_links_in_file(filename)
    start_duplicate_links_checker(links)

    if not only_duplicate_links_checker:
        start_links_working_checker(links)


if __name__ == '__main__':
    num_args = len(sys.argv)
    only_duplicate_links_checker = False

    if num_args < 2:
        print('No .md file passed')
        sys.exit(1)
    elif num_args == 3:
        third_arg = sys.argv[2].lower()
        if third_arg == '-odlc' or third_arg == '--only_duplicate_links_checker':
            only_duplicate_links_checker = True
        else:
            print(f'Third invalid argument. Usage: python {__file__} [-odlc | --only_duplicate_links_checker]')
            sys.exit(1)

    filename = sys.argv[1]

    main(filename, only_duplicate_links_checker)
--- 
+++ 
@@ -10,6 +10,7 @@
 
 def find_links_in_text(text: str) -> List[str]:
+    """Find links in a text and return a list of URLs."""
     link_pattern = re.compile(r'((?:https?://|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:\'\".,<>?«»“”‘’]))')
@@ -23,6 +24,7 @@
 
 def find_links_in_file(filename: str) -> List[str]:
+    """Find links in a file and return a list of URLs from text file."""
     with open(filename, mode='r', encoding='utf-8') as file:
         readme = file.read()
@@ -37,6 +39,10 @@
 
 def check_duplicate_links(links: List[str]) -> Tuple[bool, List]:
+    """Check for duplicated links.
+
+    Returns a tuple with True or False and duplicate list.
+    """
     seen = {}
     duplicates = []
@@ -57,6 +63,7 @@
 
 def fake_user_agent() -> str:
+    """Faking user agent as some hosting services block not-whitelisted UA."""
     user_agents = [
         'Mozilla/5.0 (Windows NT 6.2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1467.0 Safari/537.36',
@@ -86,6 +93,25 @@
 
 def has_cloudflare_protection(resp: Response) -> bool:
+    """Checks if there is any cloudflare protection in the response.
+
+    Cloudflare implements multiple network protections on a given link,
+    this script tries to detect if any of them exist in the response from request.
+
+    Common protections have the following HTTP code as a response:
+    - 403: When host header is missing or incorrect (and more)
+    - 503: When DDOS protection exists
+
+    See more about it at:
+    - https://support.cloudflare.com/hc/en-us/articles/115003014512-4xx-Client-Error
+    - https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors
+    - https://www.cloudflare.com/ddos/
+    - https://superuser.com/a/888526
+
+    Discussions in issues and pull requests:
+    - https://github.com/public-apis/public-apis/pull/2409
+    - https://github.com/public-apis/public-apis/issues/2960
+    """
     code = resp.status_code
     server = resp.headers.get('Server') or resp.headers.get('server')
@@ -124,6 +150,15 @@
 
 def check_if_link_is_working(link: str) -> Tuple[bool, str]:
+    """Checks if a link is working.
+
+    If an error is identified when the request for the link occurs,
+    the return will be a tuple with the first value True and the second
+    value a string containing the error message.
+
+    If no errors are identified, the return will be a tuple with the
+    first value False and the second an empty string.
+    """
    has_error = False
    error_message = ''
@@ -235,4 +270,4 @@
     filename = sys.argv[1]
 
-    main(filename, only_duplicate_links_checker)
+    main(filename, only_duplicate_links_checker)
https://raw.githubusercontent.com/public-apis/public-apis/HEAD/scripts/validate/links.py
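The duplicate check in the script above normalizes trailing slashes before comparing links, so `https://a.dev/api` and `https://a.dev/api/` count as the same entry. A minimal standalone sketch of that logic (same function name as the script, no network access needed):

```python
from typing import List, Tuple

def check_duplicate_links(links: List[str]) -> Tuple[bool, List[str]]:
    # Trailing slashes are stripped so 'https://a.dev/api' and
    # 'https://a.dev/api/' normalize to the same key.
    seen = {}
    duplicates = []
    for link in links:
        link = link.rstrip('/')
        if link not in seen:
            seen[link] = 1
        elif seen[link] == 1:
            duplicates.append(link)
    return (bool(duplicates), duplicates)

print(check_duplicate_links(['https://a.dev/api', 'https://a.dev/api/', 'https://b.dev']))
# (True, ['https://a.dev/api'])
```

Note that `seen[link]` is never incremented, so a link appearing three or more times is reported once per extra occurrence; that matches the script's behavior, which only needs a non-empty duplicates list to fail the check.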
Add docstrings that explain purpose and usage
#!/usr/bin/env python3

import sys
import os
import argparse
import re
import yaml
from bidi.algorithm import get_display


def load_config(path):
    # Default configuration values
    default = {
        'ltr_keywords': [],
        'ltr_symbols': [],
        'pure_ltr_pattern': r"^[\u0000-\u007F]+$",  # Matches ASCII characters (Basic Latin character)
        'rtl_chars_pattern': r"[\u0590-\u08FF]",  # Matches Right-to-Left (RTL) characters (Arabic, Hebrew, etc.)
        'severity': {
            'bidi_mismatch': 'error',  # A difference between the displayed and logical order of text
            'keyword': 'warning',      # An LTR keyword (e.g., "HTML") in an RTL context might need an &rlm;
            'symbol': 'warning',       # An LTR symbol (e.g., "C#") in an RTL context might need an &lrm;
            'pure_ltr': 'notice',      # A purely LTR segment in an RTL context might need a trailing &lrm;
            'author_meta': 'notice'    # Specific rules for LTR authors/metadata in RTL contexts.
        },
        'ignore_meta': ['PDF', 'EPUB', 'HTML', 'podcast', 'videocast'],
        'min_ltr_length': 3,
        'rlm_entities': ['&rlm;', '&#x200F;', '&#8207;'],
        'lrm_entities': ['&lrm;', '&#x200E;', '&#8206;']
    }
    # If a path is specified and the file exists, attempt to load it
    if path and os.path.exists(path):
        try:
            with open(path, encoding='utf-8') as f:
                data = yaml.safe_load(f) or {}
                conf = data.get('rtl_config', {})
                default.update(conf)
        except Exception as e:
            print(f"::warning file={path}::Could not load config: {e}. Using defaults.")  # Output to stdout for GitHub Actions
    # Return the configuration (updated defaults or just defaults)
    return default


def is_rtl_filename(path):
    name = os.path.basename(path).lower()
    return any(name.endswith(suf) for suf in ['-ar.md', '_ar.md', '-he.md', '_he.md', '-fa.md', '_fa.md', '-ur.md', '_ur.md'])


# Regex to identify a Markdown list item (e.g., "* text", "- text")
LIST_ITEM_RE = re.compile(r'^\s*[\*\-\+]\s+(.*)')

# Regex to extract title, URL, author, and metadata from a formatted book item
# Example: Book Title - Author (Metadata)
BOOK_ITEM_RE = re.compile(
    r"^\s*\[(?P<title>.+?)\]\((?P<url>.+?)\)"      # Title and URL (required)
    r"(?:\s*[-–—]\s*(?P<author>[^\(\n\[]+?))?"     # Author (optional), separated by -, –, —
    r"(?:\s*[\(\[](?P<meta>.*?)[\)\]])?\s*$"       # Metadata (optional), enclosed in parentheses () or []
)

# Regex to find the dir="rtl" or dir="ltr" attribute in an HTML tag
HTML_DIR_ATTR_RE = re.compile(r"dir\s*=\s*(['\"])(rtl|ltr)\1", re.IGNORECASE)

# Regex to find <span> tags with a dir attribute
SPAN_DIR_RE = re.compile(r'<span[^>]*dir=["\'](rtl|ltr)["\'][^>]*>', re.IGNORECASE)

# Regex to identify inline code (text enclosed in single backticks)
INLINE_CODE_RE = re.compile(r'^`.*`$')

# Regex to identify the start of a code block (```)
# Can be preceded by spaces or a '>' character (for blockquotes)
CODE_FENCE_START = re.compile(r'^\s*>?\s*```')

# Regex to identify text entirely enclosed in parentheses or square brackets.
# Useful for skipping segments like "(PDF)" or "[Free]" during analysis.
BRACKET_CONTENT_RE = re.compile(r'''
    (?:^|\W)          # Start of line or non-word character
    (\[|\()           # Open square or round bracket
    ([^\n\)\]]*?)     # Content
    (\]|\))           # Close square or round bracket
    (?:\W|$)          # End of line or non-word character
''', re.VERBOSE | re.UNICODE)  # VERBOSE for comments, UNICODE for correct matching


def split_by_span(text, base_ctx):
    # Split the text based on <span> tags
    tokens = re.split(r'(<span[^>]*dir=["\'](?:rtl|ltr)["\'][^>]*>|</span>)', text)
    # Initialize the stack with the base context
    stack = [base_ctx]
    # Initialize the segments
    segments = []
    # for each token
    for tok in tokens:
        # Skip empty tokens
        if not tok:
            continue
        # Check if the token is an opening <span> tag with a dir attribute
        m = SPAN_DIR_RE.match(tok)
        # If so, push the new context onto the stack
        if m:
            stack.append(m.group(1).lower())
            continue
        # If the token is a closing </span> tag
        if tok.lower() == '</span>':
            # Pop the last context from the stack
            if len(stack) > 1:
                stack.pop()
            continue
        # Otherwise, if the token is not a span tag, it's a text segment.
        # So, we need to append the tuple (segment, current context) to segments[]
        # Where the current context is the top element of the stack.
        segments.append((tok, stack[-1]))
    # return the list of tuples
    return segments


def lint_file(path, cfg):
    # Initialize the list of issues
    issues = []
    # Try to read the file content and handle potential errors
    try:
        lines = open(path, encoding='utf-8').read().splitlines()
    except Exception as e:
        return [f"::error file={path},line=1::Cannot read file: {e}"]  # Return as a list of issues

    # Extract configuration parameters for easier access and readability
    keywords_orig = cfg['ltr_keywords']
    symbols = cfg['ltr_symbols']
    pure_ltr_re = re.compile(cfg['pure_ltr_pattern'])
    rtl_char_re = re.compile(cfg['rtl_chars_pattern'])
    sev = cfg['severity']
    ignore_meta = set(cfg['ignore_meta'])
    min_len = cfg['min_ltr_length']

    # chr(0x200F) = RLM Unicode character
    # chr(0x200E) = LRM Unicode character
    # These control characters must be added here in the code and not in the YAML configuration file,
    # due to the fact that if we included them in the YAML file they would be invisible and, therefore,
    # the YAML file would be less readable
    RLM = [chr(0x200F)] + cfg['rlm_entities']
    LRM = [chr(0x200E)] + cfg['lrm_entities']

    # Determine the directionality context of the file (RTL or LTR) based on the filename
    file_direction_ctx = 'rtl' if is_rtl_filename(path) else 'ltr'

    # Stack to manage block-level direction contexts for nested divs.
    # Initialized with the file's base direction context.
    block_context_stack = [file_direction_ctx]

    # Iterate over each line of the file with its line number
    for idx, line in enumerate(lines, 1):
        # The active block direction context for the current line is the top of the stack.
        active_block_direction_ctx = block_context_stack[-1]

        # Skip lines that start a code block (```)
        if CODE_FENCE_START.match(line):
            continue

        # Find all opening and closing <div> tags on the line to handle cases
        # where there can be multiple <div> opening and closing on the same line
        div_tags = re.findall(r"(<div[^>]*dir=['\"](rtl|ltr)['\"][^>]*>|</div>)", line, re.IGNORECASE)
        # Process each found tag in order to correctly update the context stack
        for tag_tuple in div_tags:
            # re.findall with multiple capture groups returns a list of tuples:
            #   tag: The full matched tag (e.g., '<div...>' or '</div>')
            #   direction: The captured direction ('rtl' or 'ltr'), or empty for a closing tag
            tag, direction = tag_tuple
            # If it's an opening tag with 'markdown="1"', push the new context
            if tag.startswith('<div') and 'markdown="1"' in tag:
                block_context_stack.append(direction.lower())
            # If it's a closing tag and we are inside a div, pop the context
            elif tag == '</div>' and len(block_context_stack) > 1:
                block_context_stack.pop()

        # Check if the line is a Markdown list item
        list_item = LIST_ITEM_RE.match(line)
        # If the line is not a list item, skip to the next line
        if not list_item:
            continue

        # Extract the text content of the list item and remove leading/trailing whitespace
        text = list_item.group(1).strip()
        # Extract item parts (title, author, metadata) if it matches the book format
        book_item = BOOK_ITEM_RE.match(text)
        # If the current line is a book item
        if book_item:
            # Extract title, author, and metadata from the book item
            title = book_item.group('title')
            author = (book_item.group('author') or '').strip()
            meta = (book_item.group('meta') or '').strip()
            # If the list item is just a link like the link in the section "### Index" of the .md files (i.e., [Title](url))
            is_link_only_item = not author and not meta
        # Otherwise, if it's not a book item
        else:
            # Initialize title, author, and meta with empty strings
            title, author, meta = text, '', ''
            # Set is_link_only_item to False
            is_link_only_item = False

        # Specific check: RTL author followed by LTR metadata (e.g., اسم المؤلف (PDF))
        if active_block_direction_ctx == 'rtl' and \
           author and meta and \
           rtl_char_re.search(author) and pure_ltr_re.match(meta) and \
           len(meta) >= min_len and \
           not any(author.strip().endswith(rlm_marker) for rlm_marker in RLM):
            issues.append(
                f"::{sev['author_meta'].lower()} file={path},line={idx}::RTL author '{author.strip()}' followed by LTR meta '{meta}' may need '&rlm;' after author."
            )

        # Analyze individual parts of the item (title, author, metadata)
        for part, raw_text in [('title', title), ('author', author), ('meta', meta)]:
            # Skip if the part is empty or if it's metadata to be ignored (e.g., "PDF")
            if not raw_text or (part == 'meta' and raw_text in ignore_meta):
                continue

            # Split the part into segments based on <span> tags with dir attributes
            segments = split_by_span(raw_text, active_block_direction_ctx)

            # Filter keywords to avoid duplicates with symbols (a symbol can contain a keyword)
            filtered_keywords = [kw for kw in keywords_orig]
            for sym in symbols:
                filtered_keywords = [kw for kw in filtered_keywords if kw not in sym]

            # Iterate over each text segment and its directionality context
            for segment_text, segment_direction_ctx in segments:
                # Remove leading/trailing whitespace from the segment text
                s = segment_text.strip()

                # In the following block of code, it's checked if the segment is entirely enclosed in parentheses or brackets.
                # In fact, if the content inside is purely LTR or RTL, its display is usually
                # well-isolated by the parentheses or brackets and less prone to BIDI issues.
                # Mixed LTR/RTL content inside brackets should still be checked.

                # Check if the segment is entirely enclosed in parentheses or brackets.
                m_bracket = BRACKET_CONTENT_RE.fullmatch(s)
                if m_bracket:
                    # If it is, extract the content inside the parentheses/brackets.
                    inner_content = m_bracket.group(2)
                    # Determine if the inner content is purely LTR or purely RTL.
                    is_pure_ltr_inner = pure_ltr_re.match(inner_content) is not None
                    # Check for pure RTL: contains RTL chars AND no LTR chars (using [A-Za-z0-9] as a proxy for common LTR chars)
                    is_pure_rtl_inner = rtl_char_re.search(inner_content) is not None and re.search(r"[A-Za-z0-9]", inner_content) is None
                    # Skip the segment ONLY if the content inside is purely LTR or purely RTL.
                    if is_pure_ltr_inner or is_pure_rtl_inner:
                        continue

                # Skip if it's inline code (i.e., `...`) or already contains directionality markers (e.g., &rlm; or &lrm;)
                if any([
                    INLINE_CODE_RE.match(s),
                    any(mk in s for mk in RLM + LRM)
                ]):
                    continue

                # Check for BIDI mismatch: if the text contains both RTL and LTR
                # characters and the calculated visual order differs from the logical order.
                if rtl_char_re.search(s) and re.search(r"[A-Za-z0-9]", s):
                    disp = get_display(s)
                    if disp != s:
                        issues.append(
                            f"::{sev['bidi_mismatch'].lower()} file={path},line={idx}::BIDI mismatch in {part}: the text '{s}' is displayed as '{disp}'"
                        )

                # If the segment context is LTR, there is no need to check LTR keywords and LTR symbols
                # that might need directionality markers, so we can skip the next checks and move on to the next line of the file
                if segment_direction_ctx != 'rtl':
                    continue

                # Skip keyword and symbol checks for titles of link-only items (e.g., in the Index section of markdown files)
                if not (part == 'title' and is_link_only_item):
                    # Check for LTR symbols: if an LTR symbol is present and lacks an '&lrm;' marker
                    for sym in symbols:
                        if sym in s and not any(m in s for m in LRM):
                            issues.append(
                                f"::{sev['symbol'].lower()} file={path},line={idx}::Symbol '{sym}' in {part} '{s}' may need trailing '&lrm;' marker."
                            )
                    # Check for LTR keywords: if an LTR keyword is present and lacks an RLM marker
                    for kw in filtered_keywords:
                        if kw in s and not any(m in s for m in RLM):
                            issues.append(
                                f"::{sev['keyword'].lower()} file={path},line={idx}::Keyword '{kw}' in {part} '{s}' may need trailing '&rlm;' marker."
                            )

                # Check for "Pure LTR" text: if the segment is entirely LTR,
                # it's not a title, and has a minimum length, it might need a trailing RLM.
                if (part != 'title') and pure_ltr_re.match(s) and not rtl_char_re.search(s) and len(s) >= min_len:
                    issues.append(
                        f"::{sev['pure_ltr'].lower()} file={path},line={idx}::Pure LTR text '{s}' in {part} of RTL context may need trailing '&rlm;' marker."
                    )

    # Check for unclosed div tags at the end of the file
    if len(block_context_stack) > 1:
        issues.append(
            f"::error file={path},line={len(lines)}::Found unclosed <div dir='...'> tag. "
            f"The final block context is '{block_context_stack[-1]}', not the file's base '{file_direction_ctx}'."
        )

    # Return the list of found issues
    return issues


def get_changed_lines_for_file(filepath):
    import subprocess
    changed_lines = set()
    try:
        # Get the diff for the file (unified=0 for no context lines)
        diff = subprocess.check_output(
            ['git', 'diff', '--unified=0', 'origin/main...', '--', filepath],
            encoding='utf-8', errors='ignore'
        )
        for line in diff.splitlines():
            if line.startswith('@@'):
                # Example: @@ -10,0 +11,3 @@
                m = re.search(r'\+(\d+)(?:,(\d+))?', line)
                if m:
                    start = int(m.group(1))
                    count = int(m.group(2) or '1')
                    for i in range(start, start + count):
                        changed_lines.add(i)
    except Exception:
        # Silently ignore errors (e.g., unable to find merge base)
        pass
    return changed_lines


def main():
    # Create an ArgumentParser object to handle command-line arguments
    parser = argparse.ArgumentParser(
        description="Lints Markdown files for RTL/LTR issues, with PR annotation support."
    )
    # Argument for files/directories to scan
    parser.add_argument(
        'paths_to_scan',
        nargs='+',
        help="List of files or directories to scan for all issues."
    )
    # Optional argument for changed files (for PR annotation filtering)
    parser.add_argument(
        '--changed-files',
        nargs='*',
        default=None,
        help="List of changed files to generate PR annotations for."
    )
    # Optional argument for the log file path
    parser.add_argument(
        '--log-file',
        default='rtl-linter-output.log',
        help="File to write all linter output to."
    )
    # Parse the command-line arguments
    args = parser.parse_args()

    # Determine the directory where the script is located to find the config file
    script_dir = os.path.dirname(os.path.abspath(__file__))
    # Load the configuration from 'rtl_linter_config.yml'
    cfg = load_config(os.path.join(script_dir, 'rtl_linter_config.yml'))

    # Initialize counters for total files processed and errors/warnings found
    total = errs = 0
    # Count errors/warnings ONLY on changed/added lines for PR annotation exit code
    annotated_errs = 0

    # Normalize changed file paths for consistent comparison
    changed_files_set = set(os.path.normpath(f) for f in args.changed_files) if args.changed_files else set()

    # Build a map: {filepath: set(line_numbers)} for changed files
    changed_lines_map = {}
    for f in changed_files_set:
        changed_lines_map[f] = get_changed_lines_for_file(f)

    # Flag to check if any issues were found
    any_issues = False

    # Open the specified log file in write mode with UTF-8 encoding
    with open(args.log_file, 'w', encoding='utf-8') as log_f:
        # Iterate over each path provided in 'paths_to_scan'
        for p_scan_arg in args.paths_to_scan:
            # Normalize the scan path to ensure consistent handling (e.g., slashes)
            normalized_scan_path = os.path.normpath(p_scan_arg)
            # If the path is a directory, recursively scan for .md files
            if os.path.isdir(normalized_scan_path):
                # Walk through the directory and its subdirectories to find all Markdown files
                for root, _, files in os.walk(normalized_scan_path):
                    # For each file in the directory
                    for fn in files:
                        # If the file is a Markdown file, lint it
                        if fn.lower().endswith('.md'):
                            file_path = os.path.normpath(os.path.join(root, fn))
                            total += 1
                            issues_found = lint_file(file_path, cfg)
                            # Process each issue found
                            for issue_str in issues_found:
                                log_f.write(issue_str + '\n')
                                any_issues = True  # Flag to check if any issues were found
                                # For GitHub Actions PR annotations: print only if the file is changed
                                # and the issue is on a line that was actually modified or added in the PR
                                if file_path in changed_files_set:
                                    m = re.search(r'line=(\d+)', issue_str)
                                    if m and int(m.group(1)) in changed_lines_map.get(file_path, set()):
                                        print(issue_str)
                                        # Count errors on changed lines for the exit code logic
                                        if issue_str.startswith("::error"):
                                            annotated_errs += 1
                                # Count all errors/warnings for reporting/debugging purposes
                                if issue_str.startswith("::error") or issue_str.startswith("::warning"):
                                    errs += 1
            # If the path is a Markdown file, lint it directly
            elif normalized_scan_path.lower().endswith('.md'):
                total += 1
                issues_found = lint_file(normalized_scan_path, cfg)
                # Process each issue found
                for issue_str in issues_found:
                    # Always write the issue to the log file for full reporting
                    log_f.write(issue_str + '\n')
                    any_issues = True  # Flag to check if any issues were found
                    # For GitHub Actions PR annotations: print only if the file is changed
                    # and the issue is on a line that was actually modified or added in the PR
                    if normalized_scan_path in changed_files_set:
                        # Extract the line number from the issue string (e.g., ...line=123::)
                        m = re.search(r'line=(\d+)', issue_str)
                        if m and int(m.group(1)) in changed_lines_map.get(normalized_scan_path, set()):
                            # For GitHub Actions PR annotations: print the annotation
                            # so that GitHub Actions can display it in the PR summary
                            print(issue_str)
                            # Count errors on changed lines for the exit code logic
                            if issue_str.startswith("::error"):
                                annotated_errs += 1
                    # Count all errors/warnings for reporting/debugging purposes
                    if issue_str.startswith("::error") or issue_str.startswith("::warning"):
                        errs += 1

    # If no issues were found, remove the log file
    if not any_issues:
        try:
            os.remove(args.log_file)
        except Exception:
            pass

    # Print a debug message to stderr summarizing the linting process
    print(f"::notice ::Processed {total} files, found {errs} issues.")
    # Exit code: 1 only if there are annotated errors/warnings on changed lines
    sys.exit(1 if annotated_errs else 0)


if __name__ == '__main__':
    main()
--- 
+++ 
@@ -1,4 +1,23 @@
 #!/usr/bin/env python3
+"""
+RTL/LTR Markdown Linter.
+
+This script analyzes Markdown files to identify potential issues
+in the display of mixed Right-To-Left (RTL) and Left-To-Right (LTR) text.
+It reads configuration from a `rtl_linter_config.yml` file located in the same
+directory as the script.
+
+Key Features:
+- Line-by-line parsing of Markdown list items.
+- Detection of HTML 'dir' attributes to switch text direction context.
+- Handling of nested 'dir' contexts within '<span>' tags.
+- Detection of LTR keywords and symbols that might require Unicode markers.
+- BIDI (Bidirectional Algorithm) visual analysis using the 'python-bidi' library.
+- Parsing of metadata for book items (title, author, meta).
+- Configurable severity levels for detected issues (error, warning, notice).
+- Filters to ignore code blocks, inline code, and text within parentheses.
+- Specific check for RTL authors followed by LTR metadata.
+"""
 import sys
 import os
 import argparse
@@ -8,6 +27,20 @@
 
 def load_config(path):
+    """
+    Loads configuration from the specified YAML file.
+
+    If the file does not exist or an error occurs during loading,
+    default values will be used.
+
+    Args:
+        path (str): The path to the YAML configuration file.
+
+    Returns:
+        dict: A dictionary containing the configuration parameters.
+              Default values are merged with those loaded from the file,
+              with the latter taking precedence.
+    """
     # Default configuration values
     default = {
         'ltr_keywords': [],
@@ -42,6 +75,15 @@
 
 def is_rtl_filename(path):
+    '''
+    Checks if the given filename indicates an RTL filename.
+
+    Args:
+        path (str): The path to the file.
+
+    Returns:
+        bool: True if the filename suggests an RTL language, False otherwise.
+    '''
     name = os.path.basename(path).lower()
     return any(name.endswith(suf) for suf in ['-ar.md','_ar.md','-he.md','_he.md','-fa.md','_fa.md','-ur.md','_ur.md'])
@@ -81,6 +123,37 @@
 
 def split_by_span(text, base_ctx):
+    """
+    Splits text into segments based on nested <span> tags with dir attributes.
+
+    Args:
+        text (str): The input string to split.
+        base_ctx (str): The base directionality context ('rtl' or 'ltr').
+
+    Returns:
+        list: A list of tuples, where each tuple contains a text segment (str)
+              and its corresponding directionality context ('rtl' or 'ltr').
+
+    Example of stack behavior:
+        Input: "Text <span dir='rtl'>RTL <span dir='ltr'>LTR</span> RTL</span> Text"
+        base_ctx: 'ltr'
+
+        Initial stack: ['ltr']
+        Tokens: ["Text ", "<span dir='rtl'>", "RTL ", "<span dir='ltr'>", "LTR", "</span>", " RTL", "</span>", " Text"]
+
+        Processing:
+        1. "Text ": segments.append(("Text ", 'ltr')), stack: ['ltr']
+        2. "<span dir='rtl'>": stack.append('rtl'), stack: ['ltr', 'rtl']
+        3. "RTL ": segments.append(("RTL ", 'rtl')), stack: ['ltr', 'rtl']
+        4. "<span dir='ltr'>": stack.append('ltr'), stack: ['ltr', 'rtl', 'ltr']
+        5. "LTR": segments.append(("LTR", 'ltr')), stack: ['ltr', 'rtl', 'ltr']
+        6. "</span>": stack.pop(), stack: ['ltr', 'rtl']
+        7. " RTL": segments.append((" RTL", 'rtl')), stack: ['ltr', 'rtl']
+        8. "</span>": stack.pop(), stack: ['ltr']
+        9. " Text": segments.append((" Text", 'ltr')), stack: ['ltr']
+
+        Resulting segments: [("Text ", 'ltr'), ("RTL ", 'rtl'), ("LTR", 'ltr'), (" RTL", 'rtl'), (" Text", 'ltr')]
+    """
     # Split the text based on <span> tags
     tokens = re.split(r'(<span[^>]*dir=["\'](?:rtl|ltr)["\'][^>]*>|</span>)', text)
@@ -121,6 +194,17 @@
 
 def lint_file(path, cfg):
+    """
+    Analyzes a single Markdown file for RTL/LTR issues.
+
+    Args:
+        path (str): The path to the Markdown file to analyze.
+        cfg (dict): The configuration dictionary.
+
+    Returns:
+        list: A list of strings, where each string represents a detected issue,
+              formatted for GitHub Actions output.
+    """
     # Initialize the list of issues
     issues = []
@@ -318,6 +402,23 @@
     return issues
 
 def get_changed_lines_for_file(filepath):
+    """
+    Returns a set of line numbers (1-based) that were changed in the given file in the current PR.
+
+    This function uses 'git diff' to compare the current branch with 'origin/main' and extracts
+    the line numbers of added or modified lines. It is used to restrict PR annotations to only
+    those lines that have been changed in the pull request.
+
+    Args:
+        filepath (str): The path to the file to check for changes.
+
+    Returns:
+        set: A set of 1-based line numbers that were added or modified in the file.
+
+    Note:
+        - Requires that the script is run inside a Git repository.
+        - If the merge base cannot be found, returns an empty set and does not print errors.
+    """
     import subprocess
     changed_lines = set()
     try:
@@ -342,6 +443,21 @@
 
 def main():
+    """
+    Main entry point for the RTL/LTR Markdown linter.
+
+    Parses command-line arguments, loads configuration, and scans the specified files or directories
+    for Markdown files. For each file, it detects RTL/LTR issues and writes all findings to a log file.
+    For files changed in the current PR, only issues on changed lines are printed to stdout as GitHub
+    Actions annotations.
+
+    Exit code is 1 if any error or warning is found on changed lines, otherwise 0.
+
+    Command-line arguments:
+        paths_to_scan: List of files or directories to scan for issues.
+        --changed-files: List of files changed in the PR (for annotation filtering).
+        --log-file: Path to the output log file (default: rtl-linter-output.log).
+    """
     # Create an ArgumentParser object to handle command-line arguments
     parser = argparse.ArgumentParser(
         description="Lints Markdown files for RTL/LTR issues, with PR annotation support."
@@ -486,4 +602,4 @@
     sys.exit(1 if annotated_errs else 0)
 
 if __name__ == '__main__':
-    main()
+    main()
https://raw.githubusercontent.com/EbookFoundation/free-programming-books/HEAD/scripts/rtl_ltr_linter.py
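The linter's `get_changed_lines_for_file` works by reading unified-diff hunk headers: a header like `@@ -10,0 +11,3 @@` means three new-side lines starting at line 11. A self-contained sketch of just that parsing step (function name `changed_lines_from_diff` is mine; the script reads the same headers from `git diff --unified=0` output):

```python
import re

def changed_lines_from_diff(diff_text):
    # A '@@ -a,b +c,d @@' hunk header says d lines (default 1 when ',d'
    # is omitted) start at line c on the new side of the diff.
    changed = set()
    for line in diff_text.splitlines():
        if line.startswith('@@'):
            m = re.search(r'\+(\d+)(?:,(\d+))?', line)
            if m:
                start = int(m.group(1))
                count = int(m.group(2) or '1')
                changed.update(range(start, start + count))
    return changed

print(changed_lines_from_diff('@@ -10,0 +11,3 @@\n@@ -50,2 +54 @@'))
# {11, 12, 13, 54}
```

Running with `--unified=0` matters: with context lines present, the new-side count `d` would include unchanged lines and the set would over-report.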
Write docstrings that follow conventions
from abc import ABCMeta, abstractmethod
from enum import Enum
import sys


class Suit(Enum):

    HEART = 0
    DIAMOND = 1
    CLUBS = 2
    SPADE = 3


class Card(metaclass=ABCMeta):

    def __init__(self, value, suit):
        self.value = value
        self.suit = suit
        self.is_available = True

    @property
    @abstractmethod
    def value(self):
        pass

    @value.setter
    @abstractmethod
    def value(self, other):
        pass


class BlackJackCard(Card):

    def __init__(self, value, suit):
        super(BlackJackCard, self).__init__(value, suit)

    def is_ace(self):
        return True if self._value == 1 else False

    def is_face_card(self):
        return True if 10 < self._value <= 13 else False

    @property
    def value(self):
        if self.is_ace() == 1:
            return 1
        elif self.is_face_card():
            return 10
        else:
            return self._value

    @value.setter
    def value(self, new_value):
        if 1 <= new_value <= 13:
            self._value = new_value
        else:
            raise ValueError('Invalid card value: {}'.format(new_value))


class Hand(object):

    def __init__(self, cards):
        self.cards = cards

    def add_card(self, card):
        self.cards.append(card)

    def score(self):
        total_value = 0
        for card in self.cards:
            total_value += card.value
        return total_value


class BlackJackHand(Hand):

    BLACKJACK = 21

    def __init__(self, cards):
        super(BlackJackHand, self).__init__(cards)

    def score(self):
        min_over = sys.MAXSIZE
        max_under = -sys.MAXSIZE
        for score in self.possible_scores():
            if self.BLACKJACK < score < min_over:
                min_over = score
            elif max_under < score <= self.BLACKJACK:
                max_under = score
        return max_under if max_under != -sys.MAXSIZE else min_over

    def possible_scores(self):
        pass


class Deck(object):

    def __init__(self, cards):
        self.cards = cards
        self.deal_index = 0

    def remaining_cards(self):
        return len(self.cards) - self.deal_index

    def deal_card(self):
        try:
            card = self.cards[self.deal_index]
            card.is_available = False
            self.deal_index += 1
        except IndexError:
            return None
        return card

    def shuffle(self):
        pass
--- +++ @@ -38,6 +38,7 @@ return True if self._value == 1 else False def is_face_card(self): + """Jack = 11, Queen = 12, King = 13""" return True if 10 < self._value <= 13 else False @property @@ -90,6 +91,7 @@ return max_under if max_under != -sys.MAXSIZE else min_over def possible_scores(self): + """Return a list of possible scores, taking Aces into account.""" pass @@ -112,4 +114,4 @@ return card def shuffle(self): - pass+ pass
https://raw.githubusercontent.com/donnemartin/system-design-primer/HEAD/solutions/object_oriented_design/deck_of_cards/deck_of_cards.py
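The `possible_scores` stub in the row above is left as `pass`; one way to fill it is to enumerate every Ace-adjusted total. A minimal sketch, assuming each Ace may count as 1 or 11 (the 11 option is an assumption, since the `value` property above reports Aces as 1) and taking a plain list of card values rather than the class's internal state:

```python
from itertools import product

def possible_scores(card_values):
    """Return all distinct hand totals, counting each Ace as 1 or 11.

    `card_values` holds values as BlackJackCard.value reports them:
    Aces arrive as 1, face cards as 10.
    """
    # Each Ace contributes two candidate values; other cards just one.
    ace_options = [(1, 11) if v == 1 else (v,) for v in card_values]
    return sorted({sum(combo) for combo in product(*ace_options)})
```

For example, `possible_scores([1, 10])` yields `[11, 21]`, which `BlackJackHand.score` would then resolve to 21.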
Create docstrings for reusable components
# -*- coding: utf-8 -*- class QueryApi(object): def __init__(self, memory_cache, reverse_index_cluster): self.memory_cache = memory_cache self.reverse_index_cluster = reverse_index_cluster def parse_query(self, query): ... def process_query(self, query): query = self.parse_query(query) results = self.memory_cache.get(query) if results is None: results = self.reverse_index_cluster.process_search(query) self.memory_cache.set(query, results) return results class Node(object): def __init__(self, query, results): self.query = query self.results = results class LinkedList(object): def __init__(self): self.head = None self.tail = None def move_to_front(self, node): ... def append_to_front(self, node): ... def remove_from_tail(self): ... class Cache(object): def __init__(self, MAX_SIZE): self.MAX_SIZE = MAX_SIZE self.size = 0 self.lookup = {} self.linked_list = LinkedList() def get(self, query): node = self.lookup.get(query) if node is None: return None self.linked_list.move_to_front(node) return node.results def set(self, results, query): node = self.lookup.get(query) if node is not None: # Key exists in cache, update the value node.results = results self.linked_list.move_to_front(node) else: # Key does not exist in cache if self.size == self.MAX_SIZE: # Remove the oldest entry from the linked list and lookup self.lookup.pop(self.linked_list.tail.query, None) self.linked_list.remove_from_tail() else: self.size += 1 # Add the new key and value new_node = Node(query, results) self.linked_list.append_to_front(new_node) self.lookup[query] = new_node
--- +++ @@ -8,6 +8,9 @@ self.reverse_index_cluster = reverse_index_cluster def parse_query(self, query): + """Remove markup, break text into terms, deal with typos, + normalize capitalization, convert to use boolean operations. + """ ... def process_query(self, query): @@ -51,6 +54,10 @@ self.linked_list = LinkedList() def get(self, query): + """Get the stored query result from the cache. + + Accessing a node updates its position to the front of the LRU list. + """ node = self.lookup.get(query) if node is None: return None @@ -58,6 +65,12 @@ return node.results def set(self, results, query): + """Set the result for the given query key in the cache. + + When updating an entry, updates its position to the front of the LRU list. + If the entry is new and the cache is at capacity, removes the oldest entry + before the new entry is added. + """ node = self.lookup.get(query) if node is not None: # Key exists in cache, update the value @@ -74,4 +87,4 @@ # Add the new key and value new_node = Node(query, results) self.linked_list.append_to_front(new_node) - self.lookup[query] = new_node+ self.lookup[query] = new_node
https://raw.githubusercontent.com/donnemartin/system-design-primer/HEAD/solutions/system_design/query_cache/query_cache_snippets.py
Provide clean and structured docstrings
from abc import ABCMeta, abstractmethod from enum import Enum class VehicleSize(Enum): MOTORCYCLE = 0 COMPACT = 1 LARGE = 2 class Vehicle(metaclass=ABCMeta): def __init__(self, vehicle_size, license_plate, spot_size): self.vehicle_size = vehicle_size self.license_plate = license_plate self.spot_size = spot_size self.spots_taken = [] def clear_spots(self): for spot in self.spots_taken: spot.remove_vehicle(self) self.spots_taken = [] def take_spot(self, spot): self.spots_taken.append(spot) @abstractmethod def can_fit_in_spot(self, spot): pass class Motorcycle(Vehicle): def __init__(self, license_plate): super(Motorcycle, self).__init__(VehicleSize.MOTORCYCLE, license_plate, spot_size=1) def can_fit_in_spot(self, spot): return True class Car(Vehicle): def __init__(self, license_plate): super(Car, self).__init__(VehicleSize.COMPACT, license_plate, spot_size=1) def can_fit_in_spot(self, spot): return spot.size in (VehicleSize.LARGE, VehicleSize.COMPACT) class Bus(Vehicle): def __init__(self, license_plate): super(Bus, self).__init__(VehicleSize.LARGE, license_plate, spot_size=5) def can_fit_in_spot(self, spot): return spot.size == VehicleSize.LARGE class ParkingLot(object): def __init__(self, num_levels): self.num_levels = num_levels self.levels = [] # List of Levels def park_vehicle(self, vehicle): for level in self.levels: if level.park_vehicle(vehicle): return True return False class Level(object): SPOTS_PER_ROW = 10 def __init__(self, floor, total_spots): self.floor = floor self.num_spots = total_spots self.available_spots = 0 self.spots = [] # List of ParkingSpots def spot_freed(self): self.available_spots += 1 def park_vehicle(self, vehicle): spot = self._find_available_spot(vehicle) if spot is None: return None else: spot.park_vehicle(vehicle) return spot def _find_available_spot(self, vehicle): pass def _park_starting_at_spot(self, spot, vehicle): pass class ParkingSpot(object): def __init__(self, level, row, spot_number, spot_size, vehicle_size): self.level = level
self.row = row self.spot_number = spot_number self.spot_size = spot_size self.vehicle_size = vehicle_size self.vehicle = None def is_available(self): return True if self.vehicle is None else False def can_fit_vehicle(self, vehicle): if self.vehicle is not None: return False return vehicle.can_fit_in_spot(self) def park_vehicle(self, vehicle): pass def remove_vehicle(self): pass
--- +++ @@ -92,9 +92,11 @@ return spot def _find_available_spot(self, vehicle): + """Find an available spot where vehicle can fit, or return None""" pass def _park_starting_at_spot(self, spot, vehicle): + """Occupy starting at spot.spot_number to vehicle.spot_size.""" pass @@ -120,4 +122,4 @@ pass def remove_vehicle(self): - pass+ pass
https://raw.githubusercontent.com/donnemartin/system-design-primer/HEAD/solutions/object_oriented_design/parking_lot/parking_lot.py
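Both `_find_available_spot` and `_park_starting_at_spot` in the row above are stubs. The docstring added by the diff says occupation starts at `spot.spot_number` and spans `vehicle.spot_size` spots, which implies searching for a run of consecutive usable spots. A hedged sketch of that search, reduced to a list of can-fit flags (the function name and flag representation are illustrative, not the primer's API):

```python
def find_consecutive_run(fits, spot_size):
    """Return the start index of the first run of `spot_size` consecutive
    True flags (one flag per parking spot), or None if no run exists."""
    run = 0
    for i, ok in enumerate(fits):
        run = run + 1 if ok else 0  # reset the run on any unusable spot
        if run == spot_size:
            return i - spot_size + 1
    return None
```

In the real class this would iterate `self.spots` and test `spot.can_fit_vehicle(vehicle)` instead of boolean flags.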
Help me comply with documentation standards
class Node(object): def __init__(self, query, results): self.query = query self.results = results self.next = None class LinkedList(object): def __init__(self): self.head = None self.tail = None def move_to_front(self, node): pass def append_to_front(self, node): pass def remove_from_tail(self): pass class Cache(object): def __init__(self, MAX_SIZE): self.MAX_SIZE = MAX_SIZE self.size = 0 self.lookup = {} # key: query, value: node self.linked_list = LinkedList() def get(self, query): node = self.lookup.get(query) if node is None: return None self.linked_list.move_to_front(node) return node.results def set(self, results, query): node = self.lookup.get(query) if node is not None: # Key exists in cache, update the value node.results = results self.linked_list.move_to_front(node) else: # Key does not exist in cache if self.size == self.MAX_SIZE: # Remove the oldest entry from the linked list and lookup self.lookup.pop(self.linked_list.tail.query, None) self.linked_list.remove_from_tail() else: self.size += 1 # Add the new key and value new_node = Node(query, results) self.linked_list.append_to_front(new_node) self.lookup[query] = new_node
--- +++ @@ -30,6 +30,10 @@ self.linked_list = LinkedList() def get(self, query): + """Get the stored query result from the cache. + + Accessing a node updates its position to the front of the LRU list. + """ node = self.lookup.get(query) if node is None: return None @@ -37,6 +41,12 @@ return node.results def set(self, results, query): + """Set the result for the given query key in the cache. + + When updating an entry, updates its position to the front of the LRU list. + If the entry is new and the cache is at capacity, removes the oldest entry + before the new entry is added. + """ node = self.lookup.get(query) if node is not None: # Key exists in cache, update the value @@ -53,4 +63,4 @@ # Add the new key and value new_node = Node(query, results) self.linked_list.append_to_front(new_node) - self.lookup[query] = new_node+ self.lookup[query] = new_node
https://raw.githubusercontent.com/donnemartin/system-design-primer/HEAD/solutions/object_oriented_design/lru_cache/lru_cache.py
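The linked-list operations in the row above are stubbed out, so the cache never actually evicts. The same get/set semantics described by the diff's docstrings can be sketched with `collections.OrderedDict` standing in for the hand-rolled list (a stand-in, not the primer's own structure; the source's `(results, query)` argument order is kept):

```python
from collections import OrderedDict

class LRUCache:
    """Query-result cache: get() refreshes recency; set() evicts the
    least recently used entry once the cache is at capacity."""

    def __init__(self, max_size):
        self.max_size = max_size
        self._data = OrderedDict()  # least recently used entry first

    def get(self, query):
        if query not in self._data:
            return None
        self._data.move_to_end(query)  # mark as most recently used
        return self._data[query]

    def set(self, results, query):
        if query in self._data:
            self._data.move_to_end(query)
        elif len(self._data) == self.max_size:
            self._data.popitem(last=False)  # drop the LRU entry
        self._data[query] = results
```

With `max_size=2`, setting a third key evicts whichever of the first two was touched least recently.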
Add detailed docstrings explaining each function
# -*- coding: utf-8 -*- class PagesDataStore(object): def __init__(self, db): self.db = db pass def add_link_to_crawl(self, url): pass def remove_link_to_crawl(self, url): pass def reduce_priority_link_to_crawl(self, url): pass def extract_max_priority_page(self): pass def insert_crawled_link(self, url, signature): pass def crawled_similar(self, signature): pass class Page(object): def __init__(self, url, contents, child_urls): self.url = url self.contents = contents self.child_urls = child_urls self.signature = self.create_signature() def create_signature(self): # Create signature based on url and contents pass class Crawler(object): def __init__(self, pages, data_store, reverse_index_queue, doc_index_queue): self.pages = pages self.data_store = data_store self.reverse_index_queue = reverse_index_queue self.doc_index_queue = doc_index_queue def crawl_page(self, page): for url in page.child_urls: self.data_store.add_link_to_crawl(url) self.reverse_index_queue.generate(page) self.doc_index_queue.generate(page) self.data_store.remove_link_to_crawl(page.url) self.data_store.insert_crawled_link(page.url, page.signature) def crawl(self): while True: page = self.data_store.extract_max_priority_page() if page is None: break if self.data_store.crawled_similar(page.signature): self.data_store.reduce_priority_link_to_crawl(page.url) else: self.crawl_page(page) page = self.data_store.extract_max_priority_page()
--- +++ @@ -8,21 +8,27 @@ pass def add_link_to_crawl(self, url): + """Add the given link to `links_to_crawl`.""" pass def remove_link_to_crawl(self, url): + """Remove the given link from `links_to_crawl`.""" pass def reduce_priority_link_to_crawl(self, url): + """Reduce the priority of a link in `links_to_crawl` to avoid cycles.""" pass def extract_max_priority_page(self): + """Return the highest priority link in `links_to_crawl`.""" pass def insert_crawled_link(self, url, signature): + """Add the given link to `crawled_links`.""" pass def crawled_similar(self, signature): + """Determine if we've already crawled a page matching the given signature""" pass @@ -64,4 +70,4 @@ self.data_store.reduce_priority_link_to_crawl(page.url) else: self.crawl_page(page) - page = self.data_store.extract_max_priority_page()+ page = self.data_store.extract_max_priority_page()
https://raw.githubusercontent.com/donnemartin/system-design-primer/HEAD/solutions/system_design/web_crawler/web_crawler_snippets.py
Write reusable docstrings
# -*- coding: utf-8 -*- from mrjob.job import MRJob class SalesRanker(MRJob): def within_past_week(self, timestamp): ... def mapper(self, _, line): timestamp, product_id, category, quantity = line.split('\t') if self.within_past_week(timestamp): yield (category, product_id), quantity def reducer(self, key, values): yield key, sum(values) def mapper_sort(self, key, value): category, product_id = key quantity = value yield (category, quantity), product_id def reducer_identity(self, key, value): yield key, value def steps(self): return [ self.mr(mapper=self.mapper, reducer=self.reducer), self.mr(mapper=self.mapper_sort, reducer=self.reducer_identity), ] if __name__ == '__main__': SalesRanker.run()
--- +++ @@ -6,17 +6,56 @@ class SalesRanker(MRJob): def within_past_week(self, timestamp): + """Return True if timestamp is within past week, False otherwise.""" ... def mapper(self, _, line): + """Parse each log line, extract and transform relevant lines. + + Emit key value pairs of the form: + + (foo, p1), 2 + (bar, p1), 2 + (bar, p1), 1 + (foo, p2), 3 + (bar, p3), 10 + (foo, p4), 1 + """ timestamp, product_id, category, quantity = line.split('\t') if self.within_past_week(timestamp): yield (category, product_id), quantity def reducer(self, key, values): + """Sum values for each key. + + (foo, p1), 2 + (bar, p1), 3 + (foo, p2), 3 + (bar, p3), 10 + (foo, p4), 1 + """ yield key, sum(values) def mapper_sort(self, key, value): + """Construct key to ensure proper sorting. + + Transform key and value to the form: + + (foo, 2), p1 + (bar, 3), p1 + (foo, 3), p2 + (bar, 10), p3 + (foo, 1), p4 + + The shuffle/sort step of MapReduce will then do a + distributed sort on the keys, resulting in: + + (category1, 1), product4 + (category1, 2), product1 + (category1, 3), product2 + (category2, 3), product1 + (category2, 7), product3 + """ category, product_id = key quantity = value yield (category, quantity), product_id @@ -25,6 +64,7 @@ yield key, value def steps(self): + """Run the map and reduce steps.""" return [ self.mr(mapper=self.mapper, reducer=self.reducer), @@ -34,4 +74,4 @@ if __name__ == '__main__': - SalesRanker.run()+ SalesRanker.run()
https://raw.githubusercontent.com/donnemartin/system-design-primer/HEAD/solutions/system_design/sales_rank/sales_rank_mapreduce.py
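The intermediate key/value pairs spelled out in the diff's docstrings can be reproduced without mrjob. A plain-Python simulation of the two chained jobs (a sketch under the assumptions that the `within_past_week` filter is skipped and quantities are parsed as integers):

```python
from collections import defaultdict

def sales_rank(lines):
    """Simulate both MapReduce steps: sum quantity per (category, product),
    then re-key as (category, quantity) so sorting the keys orders products
    by quantity within each category, as the shuffle/sort step would."""
    totals = defaultdict(int)
    for line in lines:  # mapper + reducer of step 1
        _ts, product_id, category, quantity = line.split('\t')
        totals[(category, product_id)] += int(quantity)
    # mapper_sort + shuffle/sort of step 2
    return sorted(((cat, qty), pid) for (cat, pid), qty in totals.items())
```

The output keys `(category, quantity)` sort lexicographically, grouping each category together with its products in ascending quantity order.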
Add docstrings to incomplete code
# -*- coding: utf-8 -*- from mrjob.job import MRJob class SpendingByCategory(MRJob): def __init__(self, categorizer): self.categorizer = categorizer ... def current_year_month(self): ... def extract_year_month(self, timestamp): ... def handle_budget_notifications(self, key, total): ... def mapper(self, _, line): timestamp, category, amount = line.split('\t') period = self.extract_year_month(timestamp) if period == self.current_year_month(): yield (period, category), amount def reducer(self, key, values): total = sum(values) self.handle_budget_notifications(key, total) yield key, total def steps(self): return [ self.mr(mapper=self.mapper, reducer=self.reducer) ] if __name__ == '__main__': SpendingByCategory.run()
--- +++ @@ -10,26 +10,43 @@ ... def current_year_month(self): + """Return the current year and month.""" ... def extract_year_month(self, timestamp): + """Return the year and month portions of the timestamp.""" ... def handle_budget_notifications(self, key, total): + """Call notification API if nearing or exceeded budget.""" ... def mapper(self, _, line): + """Parse each log line, extract and transform relevant lines. + + Emit key value pairs of the form: + + (2016-01, shopping), 25 + (2016-01, shopping), 100 + (2016-01, gas), 50 + """ timestamp, category, amount = line.split('\t') period = self.extract_year_month(timestamp) if period == self.current_year_month(): yield (period, category), amount def reducer(self, key, values): + """Sum values for each key. + + (2016-01, shopping), 125 + (2016-01, gas), 50 + """ total = sum(values) self.handle_budget_notifications(key, total) yield key, total def steps(self): + """Run the map and reduce steps.""" return [ self.mr(mapper=self.mapper, reducer=self.reducer) @@ -37,4 +54,4 @@ if __name__ == '__main__': - SpendingByCategory.run()+ SpendingByCategory.run()
https://raw.githubusercontent.com/donnemartin/system-design-primer/HEAD/solutions/system_design/mint/mint_mapreduce.py
Create simple docstrings for beginners
# -*- coding: utf-8 -*- from mrjob.job import MRJob class HitCounts(MRJob): def extract_url(self, line): pass def extract_year_month(self, line): pass def mapper(self, _, line): url = self.extract_url(line) period = self.extract_year_month(line) yield (period, url), 1 def reducer(self, key, values): yield key, sum(values) def steps(self): return [ self.mr(mapper=self.mapper, reducer=self.reducer) ] if __name__ == '__main__': HitCounts.run()
--- +++ @@ -6,20 +6,36 @@ class HitCounts(MRJob): def extract_url(self, line): + """Extract the generated url from the log line.""" pass def extract_year_month(self, line): + """Return the year and month portions of the timestamp.""" pass def mapper(self, _, line): + """Parse each log line, extract and transform relevant lines. + + Emit key value pairs of the form: + + (2016-01, url0), 1 + (2016-01, url0), 1 + (2016-01, url1), 1 + """ url = self.extract_url(line) period = self.extract_year_month(line) yield (period, url), 1 def reducer(self, key, values): + """Sum values for each key. + + (2016-01, url0), 2 + (2016-01, url1), 1 + """ yield key, sum(values) def steps(self): + """Run the map and reduce steps.""" return [ self.mr(mapper=self.mapper, reducer=self.reducer) @@ -27,4 +43,4 @@ if __name__ == '__main__': - HitCounts.run()+ HitCounts.run()
https://raw.githubusercontent.com/donnemartin/system-design-primer/HEAD/solutions/system_design/pastebin/pastebin.py
Add docstrings for better understanding
#!/usr/bin/env python3 import json import os import re import sys from datetime import datetime, timezone from pathlib import Path import httpx from build import extract_github_repo, load_stars CACHE_MAX_AGE_HOURS = 12 DATA_DIR = Path(__file__).parent / "data" CACHE_FILE = DATA_DIR / "github_stars.json" README_PATH = Path(__file__).parent.parent / "README.md" GRAPHQL_URL = "https://api.github.com/graphql" BATCH_SIZE = 50 def extract_github_repos(text: str) -> set[str]: repos = set() for url in re.findall(r"https?://github\.com/[^\s)\]]+", text): repo = extract_github_repo(url.split("#")[0].rstrip("/")) if repo: repos.add(repo) return repos def save_cache(cache: dict) -> None: DATA_DIR.mkdir(parents=True, exist_ok=True) CACHE_FILE.write_text( json.dumps(cache, indent=2, ensure_ascii=False) + "\n", encoding="utf-8", ) def build_graphql_query(repos: list[str]) -> str: if not repos: return "" parts = [] for i, repo in enumerate(repos): owner, name = repo.split("/", 1) if '"' in owner or '"' in name: continue parts.append( f'repo_{i}: repository(owner: "{owner}", name: "{name}") ' f"{{ stargazerCount owner {{ login }} defaultBranchRef {{ target {{ ... 
on Commit {{ committedDate }} }} }} }}" ) if not parts: return "" return "query { " + " ".join(parts) + " }" def parse_graphql_response( data: dict, repos: list[str], ) -> dict[str, dict]: result = {} for i, repo in enumerate(repos): node = data.get(f"repo_{i}") if node is None: continue default_branch = node.get("defaultBranchRef") or {} target = default_branch.get("target") or {} result[repo] = { "stars": node.get("stargazerCount", 0), "owner": node.get("owner", {}).get("login", ""), "last_commit_at": target.get("committedDate", ""), } return result def fetch_batch( repos: list[str], *, client: httpx.Client, ) -> dict[str, dict]: query = build_graphql_query(repos) if not query: return {} resp = client.post(GRAPHQL_URL, json={"query": query}) resp.raise_for_status() result = resp.json() if "errors" in result: for err in result["errors"]: print(f" Warning: {err.get('message', err)}", file=sys.stderr) data = result.get("data", {}) return parse_graphql_response(data, repos) def main() -> None: token = os.environ.get("GITHUB_TOKEN", "") if not token: print("Error: GITHUB_TOKEN environment variable is required.", file=sys.stderr) sys.exit(1) readme_text = README_PATH.read_text(encoding="utf-8") current_repos = extract_github_repos(readme_text) print(f"Found {len(current_repos)} GitHub repos in README.md") cache = load_stars(CACHE_FILE) now = datetime.now(timezone.utc) # Prune entries not in current README pruned = {k: v for k, v in cache.items() if k in current_repos} if len(pruned) < len(cache): print(f"Pruned {len(cache) - len(pruned)} stale cache entries") cache = pruned # Determine which repos need fetching (missing or stale) to_fetch = [] for repo in sorted(current_repos): entry = cache.get(repo) if entry and "fetched_at" in entry: fetched = datetime.fromisoformat(entry["fetched_at"]) age_hours = (now - fetched).total_seconds() / 3600 if age_hours < CACHE_MAX_AGE_HOURS: continue to_fetch.append(repo) print(f"{len(to_fetch)} repos to fetch ({len(current_repos) - 
len(to_fetch)} cached)") if not to_fetch: save_cache(cache) print("Cache is up to date.") return # Fetch in batches fetched_count = 0 skipped_repos: list[str] = [] with httpx.Client( headers={"Authorization": f"bearer {token}", "Content-Type": "application/json"}, transport=httpx.HTTPTransport(retries=2), timeout=30, ) as client: for i in range(0, len(to_fetch), BATCH_SIZE): batch = to_fetch[i : i + BATCH_SIZE] batch_num = i // BATCH_SIZE + 1 total_batches = (len(to_fetch) + BATCH_SIZE - 1) // BATCH_SIZE print(f"Fetching batch {batch_num}/{total_batches} ({len(batch)} repos)...") try: results = fetch_batch(batch, client=client) except httpx.HTTPStatusError as e: print(f"HTTP error {e.response.status_code}", file=sys.stderr) if e.response.status_code == 401: print("Error: Invalid GITHUB_TOKEN.", file=sys.stderr) sys.exit(1) print("Saving partial cache and exiting.", file=sys.stderr) save_cache(cache) sys.exit(1) now_iso = now.isoformat() for repo in batch: if repo in results: cache[repo] = { "stars": results[repo]["stars"], "owner": results[repo]["owner"], "last_commit_at": results[repo]["last_commit_at"], "fetched_at": now_iso, } fetched_count += 1 else: skipped_repos.append(repo) # Save after each batch in case of interruption save_cache(cache) if skipped_repos: print(f"Skipped {len(skipped_repos)} repos (deleted/private/renamed)") print(f"Done. Fetched {fetched_count} repos, {len(cache)} total cached.") if __name__ == "__main__": main()
--- +++ @@ -1,4 +1,5 @@ #!/usr/bin/env python3 +"""Fetch GitHub star counts and owner info for all GitHub repos in README.md.""" import json import os @@ -20,6 +21,7 @@ def extract_github_repos(text: str) -> set[str]: + """Extract unique owner/repo pairs from GitHub URLs in markdown text.""" repos = set() for url in re.findall(r"https?://github\.com/[^\s)\]]+", text): repo = extract_github_repo(url.split("#")[0].rstrip("/")) @@ -29,6 +31,7 @@ def save_cache(cache: dict) -> None: + """Write the star cache to disk, creating data/ dir if needed.""" DATA_DIR.mkdir(parents=True, exist_ok=True) CACHE_FILE.write_text( json.dumps(cache, indent=2, ensure_ascii=False) + "\n", @@ -37,6 +40,7 @@ def build_graphql_query(repos: list[str]) -> str: + """Build a GraphQL query with aliases for up to 100 repos.""" if not repos: return "" parts = [] @@ -57,6 +61,7 @@ data: dict, repos: list[str], ) -> dict[str, dict]: + """Parse GraphQL response into {owner/repo: {stars, owner}} dict.""" result = {} for i, repo in enumerate(repos): node = data.get(f"repo_{i}") @@ -75,6 +80,7 @@ def fetch_batch( repos: list[str], *, client: httpx.Client, ) -> dict[str, dict]: + """Fetch star data for a batch of repos via GitHub GraphQL API.""" query = build_graphql_query(repos) if not query: return {} @@ -89,6 +95,7 @@ def main() -> None: + """Fetch GitHub stars for all repos in README.md, updating the JSON cache.""" token = os.environ.get("GITHUB_TOKEN", "") if not token: print("Error: GITHUB_TOKEN environment variable is required.", file=sys.stderr) @@ -173,4 +180,4 @@ if __name__ == "__main__": - main()+ main()
https://raw.githubusercontent.com/vinta/awesome-python/HEAD/website/fetch_github_stars.py
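`build_graphql_query` in the row above batches lookups through GraphQL field aliases (`repo_0:`, `repo_1:`, …) so one POST covers up to `BATCH_SIZE` repositories. A stripped-down sketch of the aliasing idea, fetching only `stargazerCount` and doing no network I/O:

```python
def build_aliased_query(repos):
    """Build one GraphQL query for several "owner/name" repos, using
    numbered aliases so response keys map back to list positions."""
    parts = []
    for i, repo in enumerate(repos):
        owner, name = repo.split('/', 1)
        parts.append(
            f'repo_{i}: repository(owner: "{owner}", name: "{name}") '
            '{ stargazerCount }'
        )
    return 'query { ' + ' '.join(parts) + ' }'
```

The response's `data` object then carries one `repo_<i>` key per surviving repository, which is what `parse_graphql_response` walks.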
Document all public functions with docstrings
from __future__ import annotations import re from typing import TypedDict from markdown_it import MarkdownIt from markdown_it.tree import SyntaxTreeNode from markupsafe import escape class AlsoSee(TypedDict): name: str url: str class ParsedEntry(TypedDict): name: str url: str description: str # inline HTML, properly escaped also_see: list[AlsoSee] class ParsedSection(TypedDict): name: str slug: str description: str # plain text, links resolved to text entries: list[ParsedEntry] entry_count: int preview: str content_html: str # rendered HTML, properly escaped # --- Slugify ---------------------------------------------------------------- _SLUG_NON_ALNUM_RE = re.compile(r"[^a-z0-9\s-]") _SLUG_WHITESPACE_RE = re.compile(r"[\s]+") _SLUG_MULTI_DASH_RE = re.compile(r"-+") def slugify(name: str) -> str: slug = name.lower() slug = _SLUG_NON_ALNUM_RE.sub("", slug) slug = _SLUG_WHITESPACE_RE.sub("-", slug.strip()) slug = _SLUG_MULTI_DASH_RE.sub("-", slug) return slug # --- Inline renderers ------------------------------------------------------- def render_inline_html(children: list[SyntaxTreeNode]) -> str: parts: list[str] = [] for child in children: match child.type: case "text": parts.append(str(escape(child.content))) case "softbreak": parts.append(" ") case "link": href = str(escape(child.attrGet("href") or "")) inner = render_inline_html(child.children) parts.append( f'<a href="{href}" target="_blank" rel="noopener">{inner}</a>' ) case "em": parts.append(f"<em>{render_inline_html(child.children)}</em>") case "strong": parts.append(f"<strong>{render_inline_html(child.children)}</strong>") case "code_inline": parts.append(f"<code>{escape(child.content)}</code>") case "html_inline": parts.append(str(escape(child.content))) return "".join(parts) def render_inline_text(children: list[SyntaxTreeNode]) -> str: parts: list[str] = [] for child in children: match child.type: case "text": parts.append(child.content) case "softbreak": parts.append(" ") case "code_inline": 
parts.append(child.content) case "em" | "strong" | "link": parts.append(render_inline_text(child.children)) return "".join(parts) # --- AST helpers ------------------------------------------------------------- def _heading_text(node: SyntaxTreeNode) -> str: for child in node.children: if child.type == "inline": return render_inline_text(child.children) return "" def _extract_description(nodes: list[SyntaxTreeNode]) -> str: if not nodes: return "" first = nodes[0] if first.type != "paragraph": return "" for child in first.children: if child.type == "inline" and len(child.children) == 1: em = child.children[0] if em.type == "em": return render_inline_text(em.children) return "" # --- Entry extraction -------------------------------------------------------- _DESC_SEP_RE = re.compile(r"^\s*[-\u2013\u2014]\s*") def _find_child(node: SyntaxTreeNode, child_type: str) -> SyntaxTreeNode | None: for child in node.children: if child.type == child_type: return child return None def _find_inline(node: SyntaxTreeNode) -> SyntaxTreeNode | None: para = _find_child(node, "paragraph") if para is None: return None return _find_child(para, "inline") def _find_first_link(inline: SyntaxTreeNode) -> SyntaxTreeNode | None: for child in inline.children: if child.type == "link": return child return None def _is_leading_link(inline: SyntaxTreeNode, link: SyntaxTreeNode) -> bool: return bool(inline.children) and inline.children[0] is link def _extract_description_html(inline: SyntaxTreeNode, first_link: SyntaxTreeNode) -> str: link_idx = next((i for i, c in enumerate(inline.children) if c is first_link), None) if link_idx is None: return "" desc_children = inline.children[link_idx + 1 :] if not desc_children: return "" html = render_inline_html(desc_children) return _DESC_SEP_RE.sub("", html) def _parse_list_entries(bullet_list: SyntaxTreeNode) -> list[ParsedEntry]: entries: list[ParsedEntry] = [] for list_item in bullet_list.children: if list_item.type != "list_item": continue inline = 
_find_inline(list_item) if inline is None: continue first_link = _find_first_link(inline) if first_link is None or not _is_leading_link(inline, first_link): # Subcategory label (plain text or text-before-link) — recurse into nested list nested = _find_child(list_item, "bullet_list") if nested: entries.extend(_parse_list_entries(nested)) continue # Entry with a link name = render_inline_text(first_link.children) url = first_link.attrGet("href") or "" desc_html = _extract_description_html(inline, first_link) # Collect also_see from nested bullet_list also_see: list[AlsoSee] = [] nested = _find_child(list_item, "bullet_list") if nested: for sub_item in nested.children: if sub_item.type != "list_item": continue sub_inline = _find_inline(sub_item) if sub_inline: sub_link = _find_first_link(sub_inline) if sub_link: also_see.append(AlsoSee( name=render_inline_text(sub_link.children), url=sub_link.attrGet("href") or "", )) entries.append(ParsedEntry( name=name, url=url, description=desc_html, also_see=also_see, )) return entries def _parse_section_entries(content_nodes: list[SyntaxTreeNode]) -> list[ParsedEntry]: entries: list[ParsedEntry] = [] for node in content_nodes: if node.type == "bullet_list": entries.extend(_parse_list_entries(node)) return entries # --- Content HTML rendering -------------------------------------------------- def _render_bullet_list_html( bullet_list: SyntaxTreeNode, *, is_sub: bool = False, ) -> str: out: list[str] = [] for list_item in bullet_list.children: if list_item.type != "list_item": continue inline = _find_inline(list_item) if inline is None: continue first_link = _find_first_link(inline) if first_link is None or not _is_leading_link(inline, first_link): # Subcategory label (plain text or text-before-link) label = str(escape(render_inline_text(inline.children))) out.append(f'<div class="subcat">{label}</div>') nested = _find_child(list_item, "bullet_list") if nested: out.append(_render_bullet_list_html(nested, is_sub=False)) continue # 
Entry with a link name = str(escape(render_inline_text(first_link.children))) url = str(escape(first_link.attrGet("href") or "")) if is_sub: out.append(f'<div class="entry-sub"><a href="{url}">{name}</a></div>') else: desc = _extract_description_html(inline, first_link) if desc: out.append( f'<div class="entry"><a href="{url}">{name}</a>' f'<span class="sep">&mdash;</span>{desc}</div>' ) else: out.append(f'<div class="entry"><a href="{url}">{name}</a></div>') # Nested items under an entry with a link are sub-entries nested = _find_child(list_item, "bullet_list") if nested: out.append(_render_bullet_list_html(nested, is_sub=True)) return "\n".join(out) def _render_section_html(content_nodes: list[SyntaxTreeNode]) -> str: parts: list[str] = [] for node in content_nodes: if node.type == "bullet_list": parts.append(_render_bullet_list_html(node)) return "\n".join(parts) # --- Section splitting ------------------------------------------------------- def _group_by_h2( nodes: list[SyntaxTreeNode], ) -> list[ParsedSection]: sections: list[ParsedSection] = [] current_name: str | None = None current_body: list[SyntaxTreeNode] = [] def flush() -> None: nonlocal current_name if current_name is None: return desc = _extract_description(current_body) content_nodes = current_body[1:] if desc else current_body entries = _parse_section_entries(content_nodes) entry_count = len(entries) + sum(len(e["also_see"]) for e in entries) preview = ", ".join(e["name"] for e in entries[:4]) content_html = _render_section_html(content_nodes) sections.append(ParsedSection( name=current_name, slug=slugify(current_name), description=desc, entries=entries, entry_count=entry_count, preview=preview, content_html=content_html, )) current_name = None for node in nodes: if node.type == "heading" and node.tag == "h2": flush() current_name = _heading_text(node) current_body = [] elif current_name is not None: current_body.append(node) flush() return sections def parse_readme(text: str) -> 
tuple[list[ParsedSection], list[ParsedSection]]: md = MarkdownIt("commonmark") tokens = md.parse(text) root = SyntaxTreeNode(tokens) children = root.children # Find thematic break (---), # Resources, and # Contributing in one pass hr_idx = None resources_idx = None contributing_idx = None for i, node in enumerate(children): if hr_idx is None and node.type == "hr": hr_idx = i elif node.type == "heading" and node.tag == "h1": text_content = _heading_text(node) if text_content == "Resources": resources_idx = i elif text_content == "Contributing": contributing_idx = i if hr_idx is None: return [], [] # Slice into category and resource ranges cat_end = resources_idx or contributing_idx or len(children) cat_nodes = children[hr_idx + 1 : cat_end] res_nodes: list[SyntaxTreeNode] = [] if resources_idx is not None: res_end = contributing_idx or len(children) res_nodes = children[resources_idx + 1 : res_end] categories = _group_by_h2(cat_nodes) resources = _group_by_h2(res_nodes) return categories, resources
--- +++ @@ -1,3 +1,4 @@+"""Parse README.md into structured section data using markdown-it-py AST.""" from __future__ import annotations @@ -39,6 +40,7 @@ def slugify(name: str) -> str: + """Convert a category name to a URL-friendly slug.""" slug = name.lower() slug = _SLUG_NON_ALNUM_RE.sub("", slug) slug = _SLUG_WHITESPACE_RE.sub("-", slug.strip()) @@ -50,6 +52,7 @@ def render_inline_html(children: list[SyntaxTreeNode]) -> str: + """Render inline AST nodes to HTML with proper escaping.""" parts: list[str] = [] for child in children: match child.type: @@ -75,6 +78,7 @@ def render_inline_text(children: list[SyntaxTreeNode]) -> str: + """Render inline AST nodes to plain text (links become their text).""" parts: list[str] = [] for child in children: match child.type: @@ -93,6 +97,7 @@ def _heading_text(node: SyntaxTreeNode) -> str: + """Extract plain text from a heading node.""" for child in node.children: if child.type == "inline": return render_inline_text(child.children) @@ -100,6 +105,10 @@ def _extract_description(nodes: list[SyntaxTreeNode]) -> str: + """Extract description from the first paragraph if it's a single <em> block. + + Pattern: _Libraries for foo._ -> "Libraries for foo." 
+ """ if not nodes: return "" first = nodes[0] @@ -119,6 +128,7 @@ def _find_child(node: SyntaxTreeNode, child_type: str) -> SyntaxTreeNode | None: + """Find first direct child of a given type.""" for child in node.children: if child.type == child_type: return child @@ -126,6 +136,7 @@ def _find_inline(node: SyntaxTreeNode) -> SyntaxTreeNode | None: + """Find the inline node in a list_item's paragraph.""" para = _find_child(node, "paragraph") if para is None: return None @@ -133,6 +144,7 @@ def _find_first_link(inline: SyntaxTreeNode) -> SyntaxTreeNode | None: + """Find the first link node among inline children.""" for child in inline.children: if child.type == "link": return child @@ -140,10 +152,16 @@ def _is_leading_link(inline: SyntaxTreeNode, link: SyntaxTreeNode) -> bool: + """Check if the link is the first child of inline (a real entry, not a subcategory label).""" return bool(inline.children) and inline.children[0] is link def _extract_description_html(inline: SyntaxTreeNode, first_link: SyntaxTreeNode) -> str: + """Extract description HTML from inline content after the first link. + + AST: [link("name"), text(" - Description.")] -> "Description." + The separator (- / en-dash / em-dash) is stripped. + """ link_idx = next((i for i, c in enumerate(inline.children) if c is first_link), None) if link_idx is None: return "" @@ -155,6 +173,13 @@ def _parse_list_entries(bullet_list: SyntaxTreeNode) -> list[ParsedEntry]: + """Extract entries from a bullet_list AST node. 
+ + Handles three patterns: + - Text-only list_item -> subcategory label -> recurse into nested list + - Link list_item with nested link-only items -> entry with also_see + - Link list_item without nesting -> simple entry + """ entries: list[ParsedEntry] = [] for list_item in bullet_list.children: @@ -206,6 +231,7 @@ def _parse_section_entries(content_nodes: list[SyntaxTreeNode]) -> list[ParsedEntry]: + """Extract all entries from a section's content nodes.""" entries: list[ParsedEntry] = [] for node in content_nodes: if node.type == "bullet_list": @@ -221,6 +247,7 @@ *, is_sub: bool = False, ) -> str: + """Render a bullet_list node to HTML with entry/entry-sub/subcat classes.""" out: list[str] = [] for list_item in bullet_list.children: @@ -267,6 +294,7 @@ def _render_section_html(content_nodes: list[SyntaxTreeNode]) -> str: + """Render a section's content nodes to HTML.""" parts: list[str] = [] for node in content_nodes: if node.type == "bullet_list": @@ -280,6 +308,7 @@ def _group_by_h2( nodes: list[SyntaxTreeNode], ) -> list[ParsedSection]: + """Group AST nodes into sections by h2 headings.""" sections: list[ParsedSection] = [] current_name: str | None = None current_body: list[SyntaxTreeNode] = [] @@ -319,6 +348,10 @@ def parse_readme(text: str) -> tuple[list[ParsedSection], list[ParsedSection]]: + """Parse README.md text into categories and resources. + + Returns (categories, resources) where each is a list of ParsedSection dicts. + """ md = MarkdownIt("commonmark") tokens = md.parse(text) root = SyntaxTreeNode(tokens) @@ -352,4 +385,4 @@ categories = _group_by_h2(cat_nodes) resources = _group_by_h2(res_nodes) - return categories, resources+ return categories, resources
https://raw.githubusercontent.com/vinta/awesome-python/HEAD/website/readme_parser.py
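The parser in the row above walks a markdown-it-py AST and groups nodes by `h2` headings. The same grouping idea can be sketched with a plain line-based scan — a simplification of ours, not the file's actual method: it ignores inline markup and code fences, which the AST-based version handles correctly.

```python
import re


def group_by_h2(text: str) -> dict[str, list[str]]:
    """Group non-blank markdown lines under their nearest '## ' heading.

    A line-based stand-in for the AST-based _group_by_h2 above.
    """
    sections: dict[str, list[str]] = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"^##\s+(.*)$", line)
        if m:
            current = m.group(1).strip()
            sections[current] = []
        elif current is not None and line.strip():
            sections[current].append(line.rstrip())
    return sections


readme = """\
## Audio
* [librosa](https://example.com) - Audio analysis.

## Caching
* [beaker](https://example.com) - Caching library.
"""
print(sorted(group_by_h2(readme)))  # ['Audio', 'Caching']
```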
Help me document legacy Python code
#!/usr/bin/env python3

import json
import re
import shutil
from pathlib import Path
from typing import TypedDict

from jinja2 import Environment, FileSystemLoader
from readme_parser import parse_readme, slugify

# Thematic grouping of categories. Each category name must match exactly
# as it appears in README.md (the ## heading text).
SECTION_GROUPS: list[tuple[str, list[str]]] = [
    (
        "Web & API",
        [
            "Web Frameworks",
            "RESTful API",
            "GraphQL",
            "WebSocket",
            "ASGI Servers",
            "WSGI Servers",
            "HTTP Clients",
            "Template Engine",
            "Web Asset Management",
            "Web Content Extracting",
            "Web Crawling",
        ],
    ),
    (
        "Data & ML",
        [
            "Data Analysis",
            "Data Validation",
            "Data Visualization",
            "Machine Learning",
            "Deep Learning",
            "Computer Vision",
            "Natural Language Processing",
            "Recommender Systems",
            "Science",
            "Quantum Computing",
        ],
    ),
    (
        "DevOps & Infrastructure",
        [
            "DevOps Tools",
            "Distributed Computing",
            "Task Queues",
            "Job Scheduler",
            "Serverless Frameworks",
            "Logging",
            "Processes",
            "Shell",
            "Network Virtualization",
            "RPC Servers",
        ],
    ),
    (
        "Database & Storage",
        [
            "Database",
            "Database Drivers",
            "ORM",
            "Caching",
            "Search",
            "Serialization",
        ],
    ),
    (
        "Development Tools",
        [
            "Testing",
            "Debugging Tools",
            "Code Analysis",
            "Build Tools",
            "Refactoring",
            "Documentation",
            "Editor Plugins and IDEs",
            "Interactive Interpreter",
        ],
    ),
    (
        "CLI & GUI",
        [
            "Command-line Interface Development",
            "Command-line Tools",
            "GUI Development",
        ],
    ),
    (
        "Content & Media",
        [
            "Audio",
            "Video",
            "Image Processing",
            "HTML Manipulation",
            "Text Processing",
            "Specific Formats Processing",
            "File Manipulation",
            "Downloader",
        ],
    ),
    (
        "System & Runtime",
        [
            "Asynchronous Programming",
            "Environment Management",
            "Package Management",
            "Package Repositories",
            "Distribution",
            "Implementations",
            "Built-in Classes Enhancement",
            "Functional Programming",
            "Configuration Files",
        ],
    ),
    (
        "Security & Auth",
        [
            "Authentication",
            "Cryptography",
            "Penetration Testing",
            "Permissions",
        ],
    ),
    (
        "Specialized",
        [
            "CMS",
            "Admin Panels",
            "Email",
            "Game Development",
            "Geolocation",
            "Hardware",
            "Internationalization",
            "Date and Time",
            "URL Manipulation",
            "Robotics",
            "Microsoft Windows",
            "Miscellaneous",
            "Algorithms and Design Patterns",
            "Static Site Generator",
        ],
    ),
    ("Resources", []),  # Filled dynamically from parsed resources
]


def group_categories(
    categories: list[dict],
    resources: list[dict],
) -> list[dict]:
    cat_by_name = {c["name"]: c for c in categories}
    groups = []
    grouped_names: set[str] = set()

    for group_name, cat_names in SECTION_GROUPS:
        grouped_names.update(cat_names)
        if group_name == "Resources":
            group_cats = list(resources)
        else:
            group_cats = [cat_by_name[n] for n in cat_names if n in cat_by_name]
        if group_cats:
            groups.append(
                {
                    "name": group_name,
                    "slug": slugify(group_name),
                    "categories": group_cats,
                }
            )

    # Any categories not in a group go into "Other"
    ungrouped = [c for c in categories if c["name"] not in grouped_names]
    if ungrouped:
        groups.append(
            {
                "name": "Other",
                "slug": "other",
                "categories": ungrouped,
            }
        )
    return groups


class Entry(TypedDict):
    name: str
    url: str
    description: str
    category: str
    group: str
    stars: int | None
    owner: str | None
    last_commit_at: str | None


class StarData(TypedDict):
    stars: int
    owner: str
    last_commit_at: str
    fetched_at: str


GITHUB_REPO_URL_RE = re.compile(r"^https?://github\.com/([^/]+/[^/]+?)(?:\.git)?/?$")


def extract_github_repo(url: str) -> str | None:
    m = GITHUB_REPO_URL_RE.match(url)
    return m.group(1) if m else None


def load_stars(path: Path) -> dict[str, StarData]:
    if path.exists():
        try:
            return json.loads(path.read_text(encoding="utf-8"))
        except json.JSONDecodeError:
            return {}
    return {}


def sort_entries(entries: list[dict]) -> list[dict]:
    def sort_key(entry: dict) -> tuple[int, int, str]:
        stars = entry["stars"]
        name = entry["name"].lower()
        if stars is None:
            return (1, 0, name)
        return (0, -stars, name)

    return sorted(entries, key=sort_key)


def extract_entries(
    categories: list[dict],
    groups: list[dict],
) -> list[dict]:
    cat_to_group: dict[str, str] = {}
    for group in groups:
        for cat in group["categories"]:
            cat_to_group[cat["name"]] = group["name"]

    entries: list[dict] = []
    for cat in categories:
        group_name = cat_to_group.get(cat["name"], "Other")
        for entry in cat["entries"]:
            entries.append(
                {
                    "name": entry["name"],
                    "url": entry["url"],
                    "description": entry["description"],
                    "category": cat["name"],
                    "group": group_name,
                    "stars": None,
                    "owner": None,
                    "last_commit_at": None,
                    "also_see": entry["also_see"],
                }
            )
    return entries


def build(repo_root: str) -> None:
    repo = Path(repo_root)
    website = repo / "website"

    readme_text = (repo / "README.md").read_text(encoding="utf-8")

    subtitle = ""
    for line in readme_text.split("\n"):
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            subtitle = stripped
            break

    categories, resources = parse_readme(readme_text)

    # All fields pre-computed: entry_count, content_html, preview, description
    total_entries = sum(c["entry_count"] for c in categories)
    groups = group_categories(categories, resources)

    entries = extract_entries(categories, groups)
    stars_data = load_stars(website / "data" / "github_stars.json")
    for entry in entries:
        repo_key = extract_github_repo(entry["url"])
        if repo_key and repo_key in stars_data:
            sd = stars_data[repo_key]
            entry["stars"] = sd["stars"]
            entry["owner"] = sd["owner"]
            entry["last_commit_at"] = sd.get("last_commit_at", "")
    entries = sort_entries(entries)

    env = Environment(
        loader=FileSystemLoader(website / "templates"),
        autoescape=True,
    )

    site_dir = website / "output"
    if site_dir.exists():
        shutil.rmtree(site_dir)
    site_dir.mkdir(parents=True)

    tpl_index = env.get_template("index.html")
    (site_dir / "index.html").write_text(
        tpl_index.render(
            categories=categories,
            resources=resources,
            groups=groups,
            subtitle=subtitle,
            entries=entries,
            total_entries=total_entries,
            total_categories=len(categories),
        ),
        encoding="utf-8",
    )

    static_src = website / "static"
    static_dst = site_dir / "static"
    if static_src.exists():
        shutil.copytree(static_src, static_dst, dirs_exist_ok=True)

    shutil.copy(repo / "README.md", site_dir / "llms.txt")

    print(f"Built single page with {len(categories)} categories + {len(resources)} resources")
    print(f"Total entries: {total_entries}")
    print(f"Output: {site_dir}")


if __name__ == "__main__":
    build(str(Path(__file__).parent.parent))
--- +++ @@ -1,4 +1,5 @@ #!/usr/bin/env python3 +"""Build a single-page HTML site from README.md for the awesome-python website.""" import json import re @@ -153,6 +154,7 @@ categories: list[dict], resources: list[dict], ) -> list[dict]: + """Organize categories and resources into thematic section groups.""" cat_by_name = {c["name"]: c for c in categories} groups = [] grouped_names: set[str] = set() @@ -209,11 +211,13 @@ def extract_github_repo(url: str) -> str | None: + """Extract owner/repo from a GitHub repo URL. Returns None for non-GitHub URLs.""" m = GITHUB_REPO_URL_RE.match(url) return m.group(1) if m else None def load_stars(path: Path) -> dict[str, StarData]: + """Load star data from JSON. Returns empty dict if file doesn't exist or is corrupt.""" if path.exists(): try: return json.loads(path.read_text(encoding="utf-8")) @@ -223,6 +227,7 @@ def sort_entries(entries: list[dict]) -> list[dict]: + """Sort entries by stars descending, then name ascending. No-star entries go last.""" def sort_key(entry: dict) -> tuple[int, int, str]: stars = entry["stars"] @@ -238,6 +243,7 @@ categories: list[dict], groups: list[dict], ) -> list[dict]: + """Flatten categories into individual library entries for table display.""" cat_to_group: dict[str, str] = {} for group in groups: for cat in group["categories"]: @@ -264,6 +270,7 @@ def build(repo_root: str) -> None: + """Main build: parse README, render single-page HTML via Jinja2 templates.""" repo = Path(repo_root) website = repo / "website" readme_text = (repo / "README.md").read_text(encoding="utf-8") @@ -330,4 +337,4 @@ if __name__ == "__main__": - build(str(Path(__file__).parent.parent))+ build(str(Path(__file__).parent.parent))
https://raw.githubusercontent.com/vinta/awesome-python/HEAD/website/build.py
Generate docstrings with examples
from __future__ import annotations


class IIRFilter:
    def __init__(self, order: int) -> None:
        self.order = order

        # a_{0} ... a_{k}
        self.a_coeffs = [1.0] + [0.0] * order
        # b_{0} ... b_{k}
        self.b_coeffs = [1.0] + [0.0] * order

        # x[n-1] ... x[n-k]
        self.input_history = [0.0] * self.order
        # y[n-1] ... y[n-k]
        self.output_history = [0.0] * self.order

    def set_coefficients(self, a_coeffs: list[float], b_coeffs: list[float]) -> None:
        if len(a_coeffs) < self.order:
            a_coeffs = [1.0, *a_coeffs]

        if len(a_coeffs) != self.order + 1:
            msg = (
                f"Expected a_coeffs to have {self.order + 1} elements "
                f"for {self.order}-order filter, got {len(a_coeffs)}"
            )
            raise ValueError(msg)

        if len(b_coeffs) != self.order + 1:
            msg = (
                f"Expected b_coeffs to have {self.order + 1} elements "
                f"for {self.order}-order filter, got {len(b_coeffs)}"
            )
            raise ValueError(msg)

        self.a_coeffs = a_coeffs
        self.b_coeffs = b_coeffs

    def process(self, sample: float) -> float:
        result = 0.0

        # Start at index 1 and do index 0 at the end.
        for i in range(1, self.order + 1):
            result += (
                self.b_coeffs[i] * self.input_history[i - 1]
                - self.a_coeffs[i] * self.output_history[i - 1]
            )

        result = (result + self.b_coeffs[0] * sample) / self.a_coeffs[0]

        self.input_history[1:] = self.input_history[:-1]
        self.output_history[1:] = self.output_history[:-1]

        self.input_history[0] = sample
        self.output_history[0] = result

        return result
--- +++ @@ -2,6 +2,26 @@ class IIRFilter: + r""" + N-Order IIR filter + Assumes working with float samples normalized on [-1, 1] + + --- + + Implementation details: + Based on the 2nd-order function from + https://en.wikipedia.org/wiki/Digital_biquad_filter, + this generalized N-order function was made. + + Using the following transfer function + .. math:: H(z)=\frac{b_{0}+b_{1}z^{-1}+b_{2}z^{-2}+...+b_{k}z^{-k}} + {a_{0}+a_{1}z^{-1}+a_{2}z^{-2}+...+a_{k}z^{-k}} + + we can rewrite this to + .. math:: y[n]={\frac{1}{a_{0}}} + \left(\left(b_{0}x[n]+b_{1}x[n-1]+b_{2}x[n-2]+...+b_{k}x[n-k]\right)- + \left(a_{1}y[n-1]+a_{2}y[n-2]+...+a_{k}y[n-k]\right)\right) + """ def __init__(self, order: int) -> None: self.order = order @@ -17,6 +37,21 @@ self.output_history = [0.0] * self.order def set_coefficients(self, a_coeffs: list[float], b_coeffs: list[float]) -> None: + """ + Set the coefficients for the IIR filter. + These should both be of size `order` + 1. + :math:`a_0` may be left out, and it will use 1.0 as default value. + + This method works well with scipy's filter design functions + + >>> # Make a 2nd-order 1000Hz butterworth lowpass filter + >>> import scipy.signal + >>> b_coeffs, a_coeffs = scipy.signal.butter(2, 1000, + ... btype='lowpass', + ... fs=48000) + >>> filt = IIRFilter(2) + >>> filt.set_coefficients(a_coeffs, b_coeffs) + """ if len(a_coeffs) < self.order: a_coeffs = [1.0, *a_coeffs] @@ -38,6 +73,13 @@ self.b_coeffs = b_coeffs def process(self, sample: float) -> float: + """ + Calculate :math:`y[n]` + + >>> filt = IIRFilter(2) + >>> filt.process(0) + 0.0 + """ result = 0.0 # Start at index 1 and do index 0 at the end. @@ -55,4 +97,4 @@ self.input_history[0] = sample self.output_history[0] = result - return result+ return result
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/audio_filters/iir_filter.py
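`process` above implements the general N-order difference equation; a first-order version makes the update easier to follow. The class name and coefficient choice below (a one-pole smoother) are our illustration, not taken from the row:

```python
class FirstOrderIIR:
    """y[n] = (b0*x[n] + b1*x[n-1] - a1*y[n-1]) / a0, the order-1
    case of the general update in IIRFilter.process above."""

    def __init__(self, a_coeffs: list[float], b_coeffs: list[float]) -> None:
        self.a0, self.a1 = a_coeffs
        self.b0, self.b1 = b_coeffs
        self.x1 = 0.0  # x[n-1]
        self.y1 = 0.0  # y[n-1]

    def process(self, sample: float) -> float:
        result = (self.b0 * sample + self.b1 * self.x1 - self.a1 * self.y1) / self.a0
        self.x1, self.y1 = sample, result
        return result


# A one-pole smoother: y[n] = 0.9*y[n-1] + 0.1*x[n]
filt = FirstOrderIIR(a_coeffs=[1.0, -0.9], b_coeffs=[0.1, 0.0])
out = [filt.process(1.0) for _ in range(3)]
print(out)  # approx [0.1, 0.19, 0.271] -- a step input charging toward 1.0
```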
Add docstrings to make code maintainable
def backtrack(
    partial: str, open_count: int, close_count: int, n: int, result: list[str]
) -> None:
    if len(partial) == 2 * n:
        # When the combination is complete, add it to the result.
        result.append(partial)
        return

    if open_count < n:
        # If we can add an open parenthesis, do so, and recurse.
        backtrack(partial + "(", open_count + 1, close_count, n, result)

    if close_count < open_count:
        # If we can add a close parenthesis (it won't make the combination invalid),
        # do so, and recurse.
        backtrack(partial + ")", open_count, close_count + 1, n, result)


def generate_parenthesis(n: int) -> list[str]:
    result: list[str] = []
    backtrack("", 0, 0, n, result)
    return result


if __name__ == "__main__":
    import doctest

    doctest.testmod()
--- +++ @@ -1,8 +1,35 @@+""" +author: Aayush Soni +Given n pairs of parentheses, write a function to generate all +combinations of well-formed parentheses. +Input: n = 2 +Output: ["(())","()()"] +Leetcode link: https://leetcode.com/problems/generate-parentheses/description/ +""" def backtrack( partial: str, open_count: int, close_count: int, n: int, result: list[str] ) -> None: + """ + Generate valid combinations of balanced parentheses using recursion. + + :param partial: A string representing the current combination. + :param open_count: An integer representing the count of open parentheses. + :param close_count: An integer representing the count of close parentheses. + :param n: An integer representing the total number of pairs. + :param result: A list to store valid combinations. + :return: None + + This function uses recursion to explore all possible combinations, + ensuring that at each step, the parentheses remain balanced. + + Example: + >>> result = [] + >>> backtrack("", 0, 0, 2, result) + >>> result + ['(())', '()()'] + """ if len(partial) == 2 * n: # When the combination is complete, add it to the result. result.append(partial) @@ -19,6 +46,29 @@ def generate_parenthesis(n: int) -> list[str]: + """ + Generate valid combinations of balanced parentheses for a given n. + + :param n: An integer representing the number of pairs of parentheses. + :return: A list of strings with valid combinations. + + This function uses a recursive approach to generate the combinations. + + Time Complexity: O(2^(2n)) - In the worst case, we have 2^(2n) combinations. + Space Complexity: O(n) - where 'n' is the number of pairs. 
+ + Example 1: + >>> generate_parenthesis(3) + ['((()))', '(()())', '(())()', '()(())', '()()()'] + + Example 2: + >>> generate_parenthesis(1) + ['()'] + + Example 3: + >>> generate_parenthesis(0) + [''] + """ result: list[str] = [] backtrack("", 0, 0, n, result) @@ -28,4 +78,4 @@ if __name__ == "__main__": import doctest - doctest.testmod()+ doctest.testmod()
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/generate_parentheses.py
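The pruning rules in `backtrack` (`open_count < n`, `close_count < open_count`) are exactly what keeps every emitted string well-formed. A brute-force cross-check of that claim: enumerate all strings of length 2n over `()`, keep the balanced ones, and compare the count to the Catalan number C(n) = C(2n, n) / (n + 1):

```python
from itertools import product
from math import comb


def is_balanced(s: str) -> bool:
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:  # a prefix with more ')' than '(' is invalid
            return False
    return depth == 0


n = 3
valid = [s for s in ("".join(p) for p in product("()", repeat=2 * n)) if is_balanced(s)]
print(len(valid), comb(2 * n, n) // (n + 1))  # 5 5
```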
Add missing documentation to my Python functions
def backtrack(
    needed_sum: int,
    power: int,
    current_number: int,
    current_sum: int,
    solutions_count: int,
) -> tuple[int, int]:
    if current_sum == needed_sum:
        # If the sum of the powers is equal to needed_sum, then we have a solution.
        solutions_count += 1
        return current_sum, solutions_count

    i_to_n = current_number**power
    if current_sum + i_to_n <= needed_sum:
        # If the sum of the powers is less than needed_sum, then continue adding powers.
        current_sum += i_to_n
        current_sum, solutions_count = backtrack(
            needed_sum, power, current_number + 1, current_sum, solutions_count
        )
        current_sum -= i_to_n
    if i_to_n < needed_sum:
        # If the power of i is less than needed_sum, then try with the next power.
        current_sum, solutions_count = backtrack(
            needed_sum, power, current_number + 1, current_sum, solutions_count
        )
    return current_sum, solutions_count


def solve(needed_sum: int, power: int) -> int:
    if not (1 <= needed_sum <= 1000 and 2 <= power <= 10):
        raise ValueError(
            "Invalid input\n"
            "needed_sum must be between 1 and 1000, power between 2 and 10."
        )

    return backtrack(needed_sum, power, 1, 0, 0)[1]  # Return the solutions_count


if __name__ == "__main__":
    import doctest

    doctest.testmod()
--- +++ @@ -1,3 +1,10 @@+""" +Problem source: https://www.hackerrank.com/challenges/the-power-sum/problem +Find the number of ways that a given integer X, can be expressed as the sum +of the Nth powers of unique, natural numbers. For example, if X=13 and N=2. +We have to find all combinations of unique squares adding up to 13. +The only solution is 2^2+3^2. Constraints: 1<=X<=1000, 2<=N<=10. +""" def backtrack( @@ -7,6 +14,22 @@ current_sum: int, solutions_count: int, ) -> tuple[int, int]: + """ + >>> backtrack(13, 2, 1, 0, 0) + (0, 1) + >>> backtrack(10, 2, 1, 0, 0) + (0, 1) + >>> backtrack(10, 3, 1, 0, 0) + (0, 0) + >>> backtrack(20, 2, 1, 0, 0) + (0, 1) + >>> backtrack(15, 10, 1, 0, 0) + (0, 0) + >>> backtrack(16, 2, 1, 0, 0) + (0, 1) + >>> backtrack(20, 1, 1, 0, 0) + (0, 64) + """ if current_sum == needed_sum: # If the sum of the powers is equal to needed_sum, then we have a solution. solutions_count += 1 @@ -29,6 +52,30 @@ def solve(needed_sum: int, power: int) -> int: + """ + >>> solve(13, 2) + 1 + >>> solve(10, 2) + 1 + >>> solve(10, 3) + 0 + >>> solve(20, 2) + 1 + >>> solve(15, 10) + 0 + >>> solve(16, 2) + 1 + >>> solve(20, 1) + Traceback (most recent call last): + ... + ValueError: Invalid input + needed_sum must be between 1 and 1000, power between 2 and 10. + >>> solve(-10, 5) + Traceback (most recent call last): + ... + ValueError: Invalid input + needed_sum must be between 1 and 1000, power between 2 and 10. + """ if not (1 <= needed_sum <= 1000 and 2 <= power <= 10): raise ValueError( "Invalid input\n" @@ -41,4 +88,4 @@ if __name__ == "__main__": import doctest - doctest.testmod()+ doctest.testmod()
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/power_sum.py
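The `backtrack` above threads `current_sum` and `solutions_count` through the recursion; the same include/exclude search over distinct bases can also return the count directly. A sketch under that framing (the function name is ours, not from the row):

```python
def count_power_sums(target: int, power: int, start: int = 1) -> int:
    """Count ways to write `target` as a sum of distinct natural numbers
    each raised to `power` -- the same problem solve() answers above."""
    term = start**power
    if term > target:
        return 0  # every later base is larger still, so no solutions remain
    # Either use `start` (done if it hits the target exactly) or skip it.
    with_term = 1 if term == target else count_power_sums(target - term, power, start + 1)
    without_term = count_power_sums(target, power, start + 1)
    return with_term + without_term


print(count_power_sums(13, 2))   # 1  (2^2 + 3^2)
print(count_power_sums(100, 2))  # 3  (10^2; 6^2 + 8^2; 1 + 9 + 16 + 25 + 49)
```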
Document my Python code with docstrings
from __future__ import annotations

from typing import Any


def generate_all_subsequences(sequence: list[Any]) -> None:
    create_state_space_tree(sequence, [], 0)


def create_state_space_tree(
    sequence: list[Any], current_subsequence: list[Any], index: int
) -> None:
    if index == len(sequence):
        print(current_subsequence)
        return

    create_state_space_tree(sequence, current_subsequence, index + 1)
    current_subsequence.append(sequence[index])
    create_state_space_tree(sequence, current_subsequence, index + 1)
    current_subsequence.pop()


if __name__ == "__main__":
    seq: list[Any] = [1, 2, 3]
    generate_all_subsequences(seq)

    seq.clear()
    seq.extend(["A", "B", "C"])
    generate_all_subsequences(seq)
--- +++ @@ -1,3 +1,10 @@+""" +In this problem, we want to determine all possible subsequences +of the given sequence. We use backtracking to solve this problem. + +Time complexity: O(2^n), +where n denotes the length of the given sequence. +""" from __future__ import annotations @@ -11,6 +18,61 @@ def create_state_space_tree( sequence: list[Any], current_subsequence: list[Any], index: int ) -> None: + """ + Creates a state space tree to iterate through each branch using DFS. + We know that each state has exactly two children. + It terminates when it reaches the end of the given sequence. + + :param sequence: The input sequence for which subsequences are generated. + :param current_subsequence: The current subsequence being built. + :param index: The current index in the sequence. + + Example: + >>> sequence = [3, 2, 1] + >>> current_subsequence = [] + >>> create_state_space_tree(sequence, current_subsequence, 0) + [] + [1] + [2] + [2, 1] + [3] + [3, 1] + [3, 2] + [3, 2, 1] + + >>> sequence = ["A", "B"] + >>> current_subsequence = [] + >>> create_state_space_tree(sequence, current_subsequence, 0) + [] + ['B'] + ['A'] + ['A', 'B'] + + >>> sequence = [] + >>> current_subsequence = [] + >>> create_state_space_tree(sequence, current_subsequence, 0) + [] + + >>> sequence = [1, 2, 3, 4] + >>> current_subsequence = [] + >>> create_state_space_tree(sequence, current_subsequence, 0) + [] + [4] + [3] + [3, 4] + [2] + [2, 4] + [2, 3] + [2, 3, 4] + [1] + [1, 4] + [1, 3] + [1, 3, 4] + [1, 2] + [1, 2, 4] + [1, 2, 3] + [1, 2, 3, 4] + """ if index == len(sequence): print(current_subsequence) @@ -28,4 +90,4 @@ seq.clear() seq.extend(["A", "B", "C"]) - generate_all_subsequences(seq)+ generate_all_subsequences(seq)
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/all_subsequences.py
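The DFS above makes a binary include/exclude choice per element, so it visits exactly 2^n leaves. The same enumeration can be done iteratively with a bitmask, which is handy when you want the subsequences as a list rather than printed output:

```python
def all_subsequences(seq: list) -> list[list]:
    """Enumerate all subsequences: bit i of the mask decides whether
    seq[i] is kept, giving the same 2**n subsets as the DFS above."""
    n = len(seq)
    return [[seq[i] for i in range(n) if mask >> i & 1] for mask in range(1 << n)]


subs = all_subsequences([1, 2, 3])
print(len(subs))  # 8
```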
Add missing documentation to my Python functions
from __future__ import annotations

solution = []


def is_safe(board: list[list[int]], row: int, column: int) -> bool:
    n = len(board)  # Size of the board

    # Check if there is any queen in the same upper column,
    # left upper diagonal and right upper diagonal
    return (
        all(board[i][j] != 1 for i, j in zip(range(row), [column] * row))
        and all(
            board[i][j] != 1
            for i, j in zip(range(row - 1, -1, -1), range(column - 1, -1, -1))
        )
        and all(
            board[i][j] != 1
            for i, j in zip(range(row - 1, -1, -1), range(column + 1, n))
        )
    )


def solve(board: list[list[int]], row: int) -> bool:
    if row >= len(board):
        """
        If the row number exceeds N, we have a board with a successful combination
        and that combination is appended to the solution list and the board is
        printed.
        """
        solution.append(board)
        printboard(board)
        print()
        return True
    for i in range(len(board)):
        """
        For every row, it iterates through each column to check if it is feasible
        to place a queen there.
        If all the combinations for that particular branch are successful, the
        board is reinitialized for the next possible combination.
        """
        if is_safe(board, row, i):
            board[row][i] = 1
            solve(board, row + 1)
            board[row][i] = 0
    return False


def printboard(board: list[list[int]]) -> None:
    for i in range(len(board)):
        for j in range(len(board)):
            if board[i][j] == 1:
                print("Q", end=" ")  # Queen is present
            else:
                print(".", end=" ")  # Empty cell
        print()


# Number of queens (e.g., n=8 for an 8x8 board)
n = 8
board = [[0 for i in range(n)] for j in range(n)]
solve(board, 0)
print("The total number of solutions are:", len(solution))
--- +++ @@ -1,3 +1,12 @@+""" + +The nqueens problem is of placing N queens on a N * N +chess board such that no queen can attack any other queens placed +on that chess board. +This means that one queen cannot have any other queen on its horizontal, vertical and +diagonal lines. + +""" from __future__ import annotations @@ -5,6 +14,34 @@ def is_safe(board: list[list[int]], row: int, column: int) -> bool: + """ + This function returns a boolean value True if it is safe to place a queen there + considering the current state of the board. + + Parameters: + board (2D matrix): The chessboard + row, column: Coordinates of the cell on the board + + Returns: + Boolean Value + + >>> is_safe([[0, 0, 0], [0, 0, 0], [0, 0, 0]], 1, 1) + True + >>> is_safe([[0, 1, 0], [0, 0, 0], [0, 0, 0]], 1, 1) + False + >>> is_safe([[1, 0, 0], [0, 0, 0], [0, 0, 0]], 1, 1) + False + >>> is_safe([[0, 0, 1], [0, 0, 0], [0, 0, 0]], 1, 1) + False + >>> is_safe([[1, 0, 0], [0, 0, 0], [0, 0, 0]], 1, 2) + True + >>> is_safe([[1, 0, 0], [0, 0, 0], [0, 0, 0]], 2, 1) + True + >>> is_safe([[0, 0, 0], [1, 0, 0], [0, 0, 0]], 0, 2) + True + >>> is_safe([[0, 0, 0], [1, 0, 0], [0, 0, 0]], 2, 2) + True + """ n = len(board) # Size of the board @@ -24,6 +61,11 @@ def solve(board: list[list[int]], row: int) -> bool: + """ + This function creates a state space tree and calls the safe function until it + receives a False Boolean and terminates that branch and backtracks to the next + possible solution branch. + """ if row >= len(board): """ If the row number exceeds N, we have a board with a successful combination @@ -48,6 +90,9 @@ def printboard(board: list[list[int]]) -> None: + """ + Prints the boards that have a successful combination. 
+ """ for i in range(len(board)): for j in range(len(board)): if board[i][j] == 1: @@ -61,4 +106,4 @@ n = 8 board = [[0 for i in range(n)] for j in range(n)] solve(board, 0) -print("The total number of solutions are:", len(solution))+print("The total number of solutions are:", len(solution))
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/n_queens.py
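`is_safe` above rescans the board in O(n) per check. A common refinement tracks the attacked columns and diagonals in sets so each safety check is O(1); cells on the same anti-diagonal share `row + col`, and cells on the same diagonal share `row - col`. A counting-only sketch of ours (it drops the board printing the row's version does):

```python
def count_n_queens(n: int) -> int:
    """Count N-queens solutions with set-based O(1) safety checks."""
    cols: set[int] = set()
    diag1: set[int] = set()  # row - col, the "\" diagonals
    diag2: set[int] = set()  # row + col, the "/" diagonals

    def place(row: int) -> int:
        if row == n:
            return 1
        total = 0
        for col in range(n):
            if col in cols or row - col in diag1 or row + col in diag2:
                continue
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            total += place(row + 1)
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)
        return total

    return place(0)


print(count_n_queens(8))  # 92
```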
Write docstrings including parameters and return values
from math import cos, sin, sqrt, tau

from audio_filters.iir_filter import IIRFilter

"""
Create 2nd-order IIR filters with Butterworth design.

Code based on https://webaudio.github.io/Audio-EQ-Cookbook/audio-eq-cookbook.html

Alternatively you can use scipy.signal.butter, which should yield the same results.
"""


def make_lowpass(
    frequency: int,
    samplerate: int,
    q_factor: float = 1 / sqrt(2),
) -> IIRFilter:
    w0 = tau * frequency / samplerate
    _sin = sin(w0)
    _cos = cos(w0)
    alpha = _sin / (2 * q_factor)

    b0 = (1 - _cos) / 2
    b1 = 1 - _cos

    a0 = 1 + alpha
    a1 = -2 * _cos
    a2 = 1 - alpha

    filt = IIRFilter(2)
    filt.set_coefficients([a0, a1, a2], [b0, b1, b0])
    return filt


def make_highpass(
    frequency: int,
    samplerate: int,
    q_factor: float = 1 / sqrt(2),
) -> IIRFilter:
    w0 = tau * frequency / samplerate
    _sin = sin(w0)
    _cos = cos(w0)
    alpha = _sin / (2 * q_factor)

    b0 = (1 + _cos) / 2
    b1 = -1 - _cos

    a0 = 1 + alpha
    a1 = -2 * _cos
    a2 = 1 - alpha

    filt = IIRFilter(2)
    filt.set_coefficients([a0, a1, a2], [b0, b1, b0])
    return filt


def make_bandpass(
    frequency: int,
    samplerate: int,
    q_factor: float = 1 / sqrt(2),
) -> IIRFilter:
    w0 = tau * frequency / samplerate
    _sin = sin(w0)
    _cos = cos(w0)
    alpha = _sin / (2 * q_factor)

    b0 = _sin / 2
    b1 = 0
    b2 = -b0

    a0 = 1 + alpha
    a1 = -2 * _cos
    a2 = 1 - alpha

    filt = IIRFilter(2)
    filt.set_coefficients([a0, a1, a2], [b0, b1, b2])
    return filt


def make_allpass(
    frequency: int,
    samplerate: int,
    q_factor: float = 1 / sqrt(2),
) -> IIRFilter:
    w0 = tau * frequency / samplerate
    _sin = sin(w0)
    _cos = cos(w0)
    alpha = _sin / (2 * q_factor)

    b0 = 1 - alpha
    b1 = -2 * _cos
    b2 = 1 + alpha

    filt = IIRFilter(2)
    filt.set_coefficients([b2, b1, b0], [b0, b1, b2])
    return filt


def make_peak(
    frequency: int,
    samplerate: int,
    gain_db: float,
    q_factor: float = 1 / sqrt(2),
) -> IIRFilter:
    w0 = tau * frequency / samplerate
    _sin = sin(w0)
    _cos = cos(w0)
    alpha = _sin / (2 * q_factor)
    big_a = 10 ** (gain_db / 40)

    b0 = 1 + alpha * big_a
    b1 = -2 * _cos
    b2 = 1 - alpha * big_a
    a0 = 1 + alpha / big_a
    a1 = -2 * _cos
    a2 = 1 - alpha / big_a

    filt = IIRFilter(2)
    filt.set_coefficients([a0, a1, a2], [b0, b1, b2])
    return filt


def make_lowshelf(
    frequency: int,
    samplerate: int,
    gain_db: float,
    q_factor: float = 1 / sqrt(2),
) -> IIRFilter:
    w0 = tau * frequency / samplerate
    _sin = sin(w0)
    _cos = cos(w0)
    alpha = _sin / (2 * q_factor)
    big_a = 10 ** (gain_db / 40)
    pmc = (big_a + 1) - (big_a - 1) * _cos
    ppmc = (big_a + 1) + (big_a - 1) * _cos
    mpc = (big_a - 1) - (big_a + 1) * _cos
    pmpc = (big_a - 1) + (big_a + 1) * _cos
    aa2 = 2 * sqrt(big_a) * alpha

    b0 = big_a * (pmc + aa2)
    b1 = 2 * big_a * mpc
    b2 = big_a * (pmc - aa2)
    a0 = ppmc + aa2
    a1 = -2 * pmpc
    a2 = ppmc - aa2

    filt = IIRFilter(2)
    filt.set_coefficients([a0, a1, a2], [b0, b1, b2])
    return filt


def make_highshelf(
    frequency: int,
    samplerate: int,
    gain_db: float,
    q_factor: float = 1 / sqrt(2),
) -> IIRFilter:
    w0 = tau * frequency / samplerate
    _sin = sin(w0)
    _cos = cos(w0)
    alpha = _sin / (2 * q_factor)
    big_a = 10 ** (gain_db / 40)
    pmc = (big_a + 1) - (big_a - 1) * _cos
    ppmc = (big_a + 1) + (big_a - 1) * _cos
    mpc = (big_a - 1) - (big_a + 1) * _cos
    pmpc = (big_a - 1) + (big_a + 1) * _cos
    aa2 = 2 * sqrt(big_a) * alpha

    b0 = big_a * (ppmc + aa2)
    b1 = -2 * big_a * pmpc
    b2 = big_a * (ppmc - aa2)
    a0 = pmc + aa2
    a1 = 2 * mpc
    a2 = pmc - aa2

    filt = IIRFilter(2)
    filt.set_coefficients([a0, a1, a2], [b0, b1, b2])
    return filt
---
+++
@@ -15,6 +15,14 @@
     samplerate: int,
     q_factor: float = 1 / sqrt(2),
 ) -> IIRFilter:
+    """
+    Creates a low-pass filter
+
+    >>> filter = make_lowpass(1000, 48000)
+    >>> filter.a_coeffs + filter.b_coeffs  # doctest: +NORMALIZE_WHITESPACE
+    [1.0922959556412573, -1.9828897227476208, 0.9077040443587427, 0.004277569313094809,
+     0.008555138626189618, 0.004277569313094809]
+    """
     w0 = tau * frequency / samplerate
     _sin = sin(w0)
     _cos = cos(w0)
@@ -37,6 +45,14 @@
     samplerate: int,
     q_factor: float = 1 / sqrt(2),
 ) -> IIRFilter:
+    """
+    Creates a high-pass filter
+
+    >>> filter = make_highpass(1000, 48000)
+    >>> filter.a_coeffs + filter.b_coeffs  # doctest: +NORMALIZE_WHITESPACE
+    [1.0922959556412573, -1.9828897227476208, 0.9077040443587427, 0.9957224306869052,
+     -1.9914448613738105, 0.9957224306869052]
+    """
     w0 = tau * frequency / samplerate
     _sin = sin(w0)
     _cos = cos(w0)
@@ -59,6 +75,14 @@
     samplerate: int,
     q_factor: float = 1 / sqrt(2),
 ) -> IIRFilter:
+    """
+    Creates a band-pass filter
+
+    >>> filter = make_bandpass(1000, 48000)
+    >>> filter.a_coeffs + filter.b_coeffs  # doctest: +NORMALIZE_WHITESPACE
+    [1.0922959556412573, -1.9828897227476208, 0.9077040443587427, 0.06526309611002579,
+     0, -0.06526309611002579]
+    """
     w0 = tau * frequency / samplerate
     _sin = sin(w0)
     _cos = cos(w0)
@@ -82,6 +106,14 @@
     samplerate: int,
     q_factor: float = 1 / sqrt(2),
 ) -> IIRFilter:
+    """
+    Creates an all-pass filter
+
+    >>> filter = make_allpass(1000, 48000)
+    >>> filter.a_coeffs + filter.b_coeffs  # doctest: +NORMALIZE_WHITESPACE
+    [1.0922959556412573, -1.9828897227476208, 0.9077040443587427, 0.9077040443587427,
+     -1.9828897227476208, 1.0922959556412573]
+    """
     w0 = tau * frequency / samplerate
     _sin = sin(w0)
     _cos = cos(w0)
@@ -102,6 +134,14 @@
     gain_db: float,
     q_factor: float = 1 / sqrt(2),
 ) -> IIRFilter:
+    """
+    Creates a peak filter
+
+    >>> filter = make_peak(1000, 48000, 6)
+    >>> filter.a_coeffs + filter.b_coeffs  # doctest: +NORMALIZE_WHITESPACE
+    [1.0653405327119334, -1.9828897227476208, 0.9346594672880666, 1.1303715025601122,
+     -1.9828897227476208, 0.8696284974398878]
+    """
     w0 = tau * frequency / samplerate
     _sin = sin(w0)
     _cos = cos(w0)
@@ -126,6 +166,14 @@
     gain_db: float,
     q_factor: float = 1 / sqrt(2),
 ) -> IIRFilter:
+    """
+    Creates a low-shelf filter
+
+    >>> filter = make_lowshelf(1000, 48000, 6)
+    >>> filter.a_coeffs + filter.b_coeffs  # doctest: +NORMALIZE_WHITESPACE
+    [3.0409336710888786, -5.608870992220748, 2.602157875636628, 3.139954022810743,
+     -5.591841778072785, 2.5201667380627257]
+    """
     w0 = tau * frequency / samplerate
     _sin = sin(w0)
     _cos = cos(w0)
@@ -155,6 +203,14 @@
     gain_db: float,
     q_factor: float = 1 / sqrt(2),
 ) -> IIRFilter:
+    """
+    Creates a high-shelf filter
+
+    >>> filter = make_highshelf(1000, 48000, 6)
+    >>> filter.a_coeffs + filter.b_coeffs  # doctest: +NORMALIZE_WHITESPACE
+    [2.2229172136088806, -3.9587208137297303, 1.7841414181566304, 4.295432981120543,
+     -7.922740859457287, 3.6756456963725253]
+    """
     w0 = tau * frequency / samplerate
     _sin = sin(w0)
     _cos = cos(w0)
@@ -175,4 +231,4 @@
 
     filt = IIRFilter(2)
     filt.set_coefficients([a0, a1, a2], [b0, b1, b2])
-    return filt
+    return filt
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/audio_filters/butterworth_filter.py
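All the biquad recipes in this row follow the same pattern: compute w0 and alpha, then an (a0, a1, a2) / (b0, b1, b2) coefficient pair. As a quick sanity sketch (the function name is mine, and a bare tuple stands in for the repo's `IIRFilter` class, which is defined in a separate file), the low-pass coefficients can be derived and checked against the doctest values in the response above:

```python
from math import cos, sin, sqrt, tau


def lowpass_coefficients(frequency: float, samplerate: int, q: float = 1 / sqrt(2)):
    # Standard second-order (biquad) low-pass design, same formulas as make_lowpass
    w0 = tau * frequency / samplerate
    alpha = sin(w0) / (2 * q)
    b1 = 1 - cos(w0)
    b0 = b2 = b1 / 2
    a0, a1, a2 = 1 + alpha, -2 * cos(w0), 1 - alpha
    return [a0, a1, a2], [b0, b1, b2]


a, b = lowpass_coefficients(1000, 48000)
# A low-pass biquad is symmetric in b: b0 == b2 and b1 == 2 * b0
assert abs(b[0] - b[2]) < 1e-12 and abs(b[1] - 2 * b[0]) < 1e-12
# Matches the a_coeffs/b_coeffs shown in the make_lowpass doctest
assert abs(a[0] - 1.0922959556412573) < 1e-9
assert abs(b[1] - 0.008555138626189618) < 1e-9
```

The other six factories vary only in how b0..b2 and a0..a2 are assembled from the same w0/alpha/big_a intermediates.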
Create docstrings for each class method
from __future__ import annotations


def generate_all_permutations(sequence: list[int | str]) -> None:
    create_state_space_tree(sequence, [], 0, [0 for i in range(len(sequence))])


def create_state_space_tree(
    sequence: list[int | str],
    current_sequence: list[int | str],
    index: int,
    index_used: list[int],
) -> None:
    if index == len(sequence):
        print(current_sequence)
        return

    for i in range(len(sequence)):
        if not index_used[i]:
            current_sequence.append(sequence[i])
            index_used[i] = True
            create_state_space_tree(sequence, current_sequence, index + 1, index_used)
            current_sequence.pop()
            index_used[i] = False


"""
remove the comment to take an input from the user

print("Enter the elements")
sequence = list(map(int, input().split()))
"""

sequence: list[int | str] = [3, 1, 2, 4]
generate_all_permutations(sequence)

sequence_2: list[int | str] = ["A", "B", "C"]
generate_all_permutations(sequence_2)
---
+++
@@ -1,3 +1,10 @@
+"""
+In this problem, we want to determine all possible permutations
+of the given sequence. We use backtracking to solve this problem.
+
+Time complexity: O(n! * n),
+where n denotes the length of the given sequence.
+"""
 from __future__ import annotations
 
 
@@ -12,6 +19,47 @@
     index: int,
     index_used: list[int],
 ) -> None:
+    """
+    Creates a state space tree to iterate through each branch using DFS.
+    We know that each state has exactly len(sequence) - index children.
+    It terminates when it reaches the end of the given sequence.
+
+    :param sequence: The input sequence for which permutations are generated.
+    :param current_sequence: The current permutation being built.
+    :param index: The current index in the sequence.
+    :param index_used: list to track which elements are used in permutation.
+
+    Example 1:
+    >>> sequence = [1, 2, 3]
+    >>> current_sequence = []
+    >>> index_used = [False, False, False]
+    >>> create_state_space_tree(sequence, current_sequence, 0, index_used)
+    [1, 2, 3]
+    [1, 3, 2]
+    [2, 1, 3]
+    [2, 3, 1]
+    [3, 1, 2]
+    [3, 2, 1]
+
+    Example 2:
+    >>> sequence = ["A", "B", "C"]
+    >>> current_sequence = []
+    >>> index_used = [False, False, False]
+    >>> create_state_space_tree(sequence, current_sequence, 0, index_used)
+    ['A', 'B', 'C']
+    ['A', 'C', 'B']
+    ['B', 'A', 'C']
+    ['B', 'C', 'A']
+    ['C', 'A', 'B']
+    ['C', 'B', 'A']
+
+    Example 3:
+    >>> sequence = [1]
+    >>> current_sequence = []
+    >>> index_used = [False]
+    >>> create_state_space_tree(sequence, current_sequence, 0, index_used)
+    [1]
+    """
     if index == len(sequence):
         print(current_sequence)
@@ -37,4 +85,4 @@
 generate_all_permutations(sequence)
 
 sequence_2: list[int | str] = ["A", "B", "C"]
-generate_all_permutations(sequence_2)
+generate_all_permutations(sequence_2)
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/all_permutations.py
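The generator in this row prints each permutation as a side effect. A variant that collects results instead (a sketch of the same backtracking idea, not part of the dataset) is easy to cross-check against `itertools.permutations`, which yields permutations in the same index order the backtracking loop visits them:

```python
from itertools import permutations


def all_permutations(sequence):
    result = []

    def backtrack(current, used):
        # A full-length current list is one complete permutation
        if len(current) == len(sequence):
            result.append(current[:])
            return
        for i, item in enumerate(sequence):
            if not used[i]:
                used[i] = True
                current.append(item)
                backtrack(current, used)
                current.pop()  # undo the choice (backtrack)
                used[i] = False

    backtrack([], [False] * len(sequence))
    return result


assert all_permutations([3, 1, 2]) == [list(p) for p in permutations([3, 1, 2])]
```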
Write docstrings that follow conventions
import string


def backtrack(
    current_word: str, path: list[str], end_word: str, word_set: set[str]
) -> list[str]:
    # Base case: If the current word is the end word, return the path
    if current_word == end_word:
        return path

    # Try all possible single-letter transformations
    for i in range(len(current_word)):
        for c in string.ascii_lowercase:  # Try changing each letter
            transformed_word = current_word[:i] + c + current_word[i + 1 :]
            if transformed_word in word_set:
                word_set.remove(transformed_word)
                # Recur with the new word added to the path
                result = backtrack(
                    transformed_word, [*path, transformed_word], end_word, word_set
                )
                if result:  # valid transformation found
                    return result
                word_set.add(transformed_word)  # backtrack

    return []  # No valid transformation found


def word_ladder(begin_word: str, end_word: str, word_set: set[str]) -> list[str]:
    if end_word not in word_set:  # no valid transformation possible
        return []

    # Perform backtracking starting from the begin_word
    return backtrack(begin_word, [begin_word], end_word, word_set)
---
+++
@@ -1,3 +1,13 @@
+"""
+Word Ladder is a classic problem in computer science.
+The problem is to transform a start word into an end word
+by changing one letter at a time.
+Each intermediate word must be a valid word from a given list of words.
+The goal is to find a transformation sequence
+from the start word to the end word.
+
+Wikipedia: https://en.wikipedia.org/wiki/Word_ladder
+"""
 import string
 
 
@@ -5,6 +15,34 @@
 def backtrack(
     current_word: str, path: list[str], end_word: str, word_set: set[str]
 ) -> list[str]:
+    """
+    Helper function to perform backtracking to find the transformation
+    from the current_word to the end_word.
+
+    Parameters:
+    current_word (str): The current word in the transformation sequence.
+    path (list[str]): The list of transformations from begin_word to current_word.
+    end_word (str): The target word for transformation.
+    word_set (set[str]): The set of valid words for transformation.
+
+    Returns:
+    list[str]: The list of transformations from begin_word to end_word.
+               Returns an empty list if there is no valid
+               transformation from current_word to end_word.
+
+    Example:
+    >>> backtrack("hit", ["hit"], "cog", {"hot", "dot", "dog", "lot", "log", "cog"})
+    ['hit', 'hot', 'dot', 'lot', 'log', 'cog']
+
+    >>> backtrack("hit", ["hit"], "cog", {"hot", "dot", "dog", "lot", "log"})
+    []
+
+    >>> backtrack("lead", ["lead"], "gold", {"load", "goad", "gold", "lead", "lord"})
+    ['lead', 'lead', 'load', 'goad', 'gold']
+
+    >>> backtrack("game", ["game"], "code", {"came", "cage", "code", "cade", "gave"})
+    ['game', 'came', 'cade', 'code']
+    """
 
     # Base case: If the current word is the end word, return the path
     if current_word == end_word:
@@ -28,9 +66,35 @@
 
 
 def word_ladder(begin_word: str, end_word: str, word_set: set[str]) -> list[str]:
+    """
+    Solve the Word Ladder problem using Backtracking and return
+    the list of transformations from begin_word to end_word.
+
+    Parameters:
+    begin_word (str): The word from which the transformation starts.
+    end_word (str): The target word for transformation.
+    word_list (list[str]): The list of valid words for transformation.
+
+    Returns:
+    list[str]: The list of transformations from begin_word to end_word.
+               Returns an empty list if there is no valid transformation.
+
+    Example:
+    >>> word_ladder("hit", "cog", ["hot", "dot", "dog", "lot", "log", "cog"])
+    ['hit', 'hot', 'dot', 'lot', 'log', 'cog']
+
+    >>> word_ladder("hit", "cog", ["hot", "dot", "dog", "lot", "log"])
+    []
+
+    >>> word_ladder("lead", "gold", ["load", "goad", "gold", "lead", "lord"])
+    ['lead', 'lead', 'load', 'goad', 'gold']
+
+    >>> word_ladder("game", "code", ["came", "cage", "code", "cade", "gave"])
+    ['game', 'came', 'cade', 'code']
+    """
 
     if end_word not in word_set:  # no valid transformation possible
         return []
 
     # Perform backtracking starting from the begin_word
-    return backtrack(begin_word, [begin_word], end_word, word_set)
+    return backtrack(begin_word, [begin_word], end_word, word_set)
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/word_ladder.py
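The backtracking solver above returns the first ladder it finds, which is not necessarily the shortest (its own doctests include a repeated word). A breadth-first search over the same word set finds a shortest ladder and makes a useful cross-check; this is an illustrative sketch with names of my own choosing, not the repo's API:

```python
import string
from collections import deque


def shortest_ladder(begin, end, words):
    words = set(words)
    if end not in words:
        return []
    queue = deque([[begin]])
    while queue:
        path = queue.popleft()
        word = path[-1]
        if word == end:
            return path
        # Try every one-letter mutation that is still an unused dictionary word
        for i in range(len(word)):
            for c in string.ascii_lowercase:
                nxt = word[:i] + c + word[i + 1 :]
                if nxt in words:
                    words.remove(nxt)  # each word may be used at most once
                    queue.append([*path, nxt])
    return []


assert shortest_ladder("hit", "cog", ["hot", "dot", "dog", "lot", "log", "cog"]) == [
    "hit", "hot", "dot", "dog", "cog",
]
```

BFS guarantees minimality because paths are dequeued in order of length, at the cost of holding the frontier in memory; the backtracking version uses only the recursion stack.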
Add docstrings for utility scripts
from __future__ import annotations

from itertools import combinations


def combination_lists(n: int, k: int) -> list[list[int]]:
    return [list(x) for x in combinations(range(1, n + 1), k)]


def generate_all_combinations(n: int, k: int) -> list[list[int]]:
    if k < 0:
        raise ValueError("k must not be negative")
    if n < 0:
        raise ValueError("n must not be negative")

    result: list[list[int]] = []
    create_all_state(1, n, k, [], result)
    return result


def create_all_state(
    increment: int,
    total_number: int,
    level: int,
    current_list: list[int],
    total_list: list[list[int]],
) -> None:
    if level == 0:
        total_list.append(current_list[:])
        return

    for i in range(increment, total_number - level + 2):
        current_list.append(i)
        create_all_state(i + 1, total_number, level - 1, current_list, total_list)
        current_list.pop()


if __name__ == "__main__":
    from doctest import testmod

    testmod()
    print(generate_all_combinations(n=4, k=2))
    tests = ((n, k) for n in range(1, 5) for k in range(1, 5))
    for n, k in tests:
        print(n, k, generate_all_combinations(n, k) == combination_lists(n, k))

    print("Benchmark:")
    from timeit import timeit

    for func in ("combination_lists", "generate_all_combinations"):
        print(f"{func:>25}(): {timeit(f'{func}(n=4, k = 2)', globals=globals())}")
---
+++
@@ -1,3 +1,9 @@
+"""
+In this problem, we want to determine all possible combinations of k
+numbers out of 1 ... n. We use backtracking to solve this problem.
+
+Time complexity: O(C(n,k)) which is O(n choose k) = O((n!/(k! * (n - k)!))),
+"""
 from __future__ import annotations
 
 
@@ -5,10 +11,46 @@
 
 
 def combination_lists(n: int, k: int) -> list[list[int]]:
+    """
+    Generates all possible combinations of k numbers out of 1 ... n using itertools.
+
+    >>> combination_lists(n=4, k=2)
+    [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
+    """
    return [list(x) for x in combinations(range(1, n + 1), k)]
 
 
 def generate_all_combinations(n: int, k: int) -> list[list[int]]:
+    """
+    Generates all possible combinations of k numbers out of 1 ... n using backtracking.
+
+    >>> generate_all_combinations(n=4, k=2)
+    [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
+    >>> generate_all_combinations(n=0, k=0)
+    [[]]
+    >>> generate_all_combinations(n=10, k=-1)
+    Traceback (most recent call last):
+        ...
+    ValueError: k must not be negative
+    >>> generate_all_combinations(n=-1, k=10)
+    Traceback (most recent call last):
+        ...
+    ValueError: n must not be negative
+    >>> generate_all_combinations(n=5, k=4)
+    [[1, 2, 3, 4], [1, 2, 3, 5], [1, 2, 4, 5], [1, 3, 4, 5], [2, 3, 4, 5]]
+    >>> generate_all_combinations(n=3, k=3)
+    [[1, 2, 3]]
+    >>> generate_all_combinations(n=3, k=1)
+    [[1], [2], [3]]
+    >>> generate_all_combinations(n=1, k=0)
+    [[]]
+    >>> generate_all_combinations(n=1, k=1)
+    [[1]]
+    >>> from itertools import combinations
+    >>> all(generate_all_combinations(n, k) == combination_lists(n, k)
+    ...     for n in range(1, 6) for k in range(1, 6))
+    True
+    """
     if k < 0:
         raise ValueError("k must not be negative")
     if n < 0:
@@ -26,6 +68,28 @@
     current_list: list[int],
     total_list: list[list[int]],
 ) -> None:
+    """
+    Helper function to recursively build all combinations.
+
+    >>> create_all_state(1, 4, 2, [], result := [])
+    >>> result
+    [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
+    >>> create_all_state(1, 3, 3, [], result := [])
+    >>> result
+    [[1, 2, 3]]
+    >>> create_all_state(2, 2, 1, [1], result := [])
+    >>> result
+    [[1, 2]]
+    >>> create_all_state(1, 0, 0, [], result := [])
+    >>> result
+    [[]]
+    >>> create_all_state(1, 4, 0, [1, 2], result := [])
+    >>> result
+    [[1, 2]]
+    >>> create_all_state(5, 4, 2, [1, 2], result := [])
+    >>> result
+    []
+    """
     if level == 0:
         total_list.append(current_list[:])
         return
@@ -49,4 +113,4 @@
     from timeit import timeit
 
     for func in ("combination_lists", "generate_all_combinations"):
-        print(f"{func:>25}(): {timeit(f'{func}(n=4, k = 2)', globals=globals())}")
+        print(f"{func:>25}(): {timeit(f'{func}(n=4, k = 2)', globals=globals())}")
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/all_combinations.py
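The key invariant in `create_all_state` is that candidates are chosen in increasing order (`range(increment, ...)`), which is what prevents the same combination from being emitted twice. A compact sketch of the same idea (names are mine), with the result count cross-checked against `math.comb`:

```python
from math import comb


def combinations_backtracking(n, k):
    result = []

    def build(start, current):
        if len(current) == k:
            result.append(current[:])
            return
        # Choosing values in strictly increasing order avoids duplicates
        for value in range(start, n + 1):
            current.append(value)
            build(value + 1, current)
            current.pop()  # backtrack

    build(1, [])
    return result


combos = combinations_backtracking(4, 2)
assert combos == [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
# The number of emitted combinations must equal n choose k
assert len(combinations_backtracking(6, 3)) == comb(6, 3)
```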
Insert docstrings into my code
def valid_connection(
    graph: list[list[int]], next_ver: int, curr_ind: int, path: list[int]
) -> bool:
    # 1. Validate that path exists between current and next vertices
    if graph[path[curr_ind - 1]][next_ver] == 0:
        return False

    # 2. Validate that next vertex is not already in path
    return not any(vertex == next_ver for vertex in path)


def util_hamilton_cycle(graph: list[list[int]], path: list[int], curr_ind: int) -> bool:
    # Base Case
    if curr_ind == len(graph):
        # return whether path exists between current and starting vertices
        return graph[path[curr_ind - 1]][path[0]] == 1

    # Recursive Step
    for next_ver in range(len(graph)):
        if valid_connection(graph, next_ver, curr_ind, path):
            # Insert current vertex into path as next transition
            path[curr_ind] = next_ver
            # Validate created path
            if util_hamilton_cycle(graph, path, curr_ind + 1):
                return True
            # Backtrack
            path[curr_ind] = -1
    return False


def hamilton_cycle(graph: list[list[int]], start_index: int = 0) -> list[int]:
    # Initialize path with -1, indicating that we have not visited them yet
    path = [-1] * (len(graph) + 1)
    # initialize start and end of path with starting index
    path[0] = path[-1] = start_index
    # evaluate and if we find answer return path either return empty array
    return path if util_hamilton_cycle(graph, path, 1) else []
---
+++
@@ -1,8 +1,42 @@
+"""
+A Hamiltonian cycle (Hamiltonian circuit) is a graph cycle
+through a graph that visits each node exactly once.
+Determining whether such paths and cycles exist in graphs
+is the 'Hamiltonian path problem', which is NP-complete.
+
+Wikipedia: https://en.wikipedia.org/wiki/Hamiltonian_path
+"""
 
 
 def valid_connection(
     graph: list[list[int]], next_ver: int, curr_ind: int, path: list[int]
 ) -> bool:
+    """
+    Checks whether it is possible to add next into path by validating 2 statements
+    1. There should be path between current and next vertex
+    2. Next vertex should not be in path
+    If both validations succeed we return True, saying that it is possible to connect
+    this vertices, otherwise we return False
+
+    Case 1:Use exact graph as in main function, with initialized values
+    >>> graph = [[0, 1, 0, 1, 0],
+    ...          [1, 0, 1, 1, 1],
+    ...          [0, 1, 0, 0, 1],
+    ...          [1, 1, 0, 0, 1],
+    ...          [0, 1, 1, 1, 0]]
+    >>> path = [0, -1, -1, -1, -1, 0]
+    >>> curr_ind = 1
+    >>> next_ver = 1
+    >>> valid_connection(graph, next_ver, curr_ind, path)
+    True
+
+    Case 2: Same graph, but trying to connect to node that is already in path
+    >>> path = [0, 1, 2, 4, -1, 0]
+    >>> curr_ind = 4
+    >>> next_ver = 1
+    >>> valid_connection(graph, next_ver, curr_ind, path)
+    False
+    """
     # 1. Validate that path exists between current and next vertices
     if graph[path[curr_ind - 1]][next_ver] == 0:
@@ -13,6 +47,47 @@
 
 
 def util_hamilton_cycle(graph: list[list[int]], path: list[int], curr_ind: int) -> bool:
+    """
+    Pseudo-Code
+    Base Case:
+    1. Check if we visited all of vertices
+        1.1 If last visited vertex has path to starting vertex return True either
+            return False
+    Recursive Step:
+    2. Iterate over each vertex
+        Check if next vertex is valid for transiting from current vertex
+        2.1 Remember next vertex as next transition
+        2.2 Do recursive call and check if going to this vertex solves problem
+        2.3 If next vertex leads to solution return True
+        2.4 Else backtrack, delete remembered vertex
+
+    Case 1: Use exact graph as in main function, with initialized values
+    >>> graph = [[0, 1, 0, 1, 0],
+    ...          [1, 0, 1, 1, 1],
+    ...          [0, 1, 0, 0, 1],
+    ...          [1, 1, 0, 0, 1],
+    ...          [0, 1, 1, 1, 0]]
+    >>> path = [0, -1, -1, -1, -1, 0]
+    >>> curr_ind = 1
+    >>> util_hamilton_cycle(graph, path, curr_ind)
+    True
+    >>> path
+    [0, 1, 2, 4, 3, 0]
+
+    Case 2: Use exact graph as in previous case, but in the properties taken from
+    middle of calculation
+    >>> graph = [[0, 1, 0, 1, 0],
+    ...          [1, 0, 1, 1, 1],
+    ...          [0, 1, 0, 0, 1],
+    ...          [1, 1, 0, 0, 1],
+    ...          [0, 1, 1, 1, 0]]
+    >>> path = [0, 1, 2, -1, -1, 0]
+    >>> curr_ind = 3
+    >>> util_hamilton_cycle(graph, path, curr_ind)
+    True
+    >>> path
+    [0, 1, 2, 4, 3, 0]
+    """
     # Base Case
     if curr_ind == len(graph):
@@ -33,10 +108,69 @@
 
 
 def hamilton_cycle(graph: list[list[int]], start_index: int = 0) -> list[int]:
+    r"""
+    Wrapper function to call subroutine called util_hamilton_cycle,
+    which will either return array of vertices indicating hamiltonian cycle
+    or an empty list indicating that hamiltonian cycle was not found.
+    Case 1:
+    Following graph consists of 5 edges.
+    If we look closely, we can see that there are multiple Hamiltonian cycles.
+    For example one result is when we iterate like:
+    (0)->(1)->(2)->(4)->(3)->(0)
+
+    (0)---(1)---(2)
+     |   /   \   |
+     |  /     \  |
+     | /       \ |
+     |/         \|
+    (3)---------(4)
+    >>> graph = [[0, 1, 0, 1, 0],
+    ...          [1, 0, 1, 1, 1],
+    ...          [0, 1, 0, 0, 1],
+    ...          [1, 1, 0, 0, 1],
+    ...          [0, 1, 1, 1, 0]]
+    >>> hamilton_cycle(graph)
+    [0, 1, 2, 4, 3, 0]
+
+    Case 2:
+    Same Graph as it was in Case 1, changed starting index from default to 3
+
+    (0)---(1)---(2)
+     |   /   \   |
+     |  /     \  |
+     | /       \ |
+     |/         \|
+    (3)---------(4)
+    >>> graph = [[0, 1, 0, 1, 0],
+    ...          [1, 0, 1, 1, 1],
+    ...          [0, 1, 0, 0, 1],
+    ...          [1, 1, 0, 0, 1],
+    ...          [0, 1, 1, 1, 0]]
+    >>> hamilton_cycle(graph, 3)
+    [3, 0, 1, 2, 4, 3]
+
+    Case 3:
+    Following Graph is exactly what it was before, but edge 3-4 is removed.
+    Result is that there is no Hamiltonian Cycle anymore.
+
+    (0)---(1)---(2)
+     |   /   \   |
+     |  /     \  |
+     | /       \ |
+     |/         \|
+    (3)         (4)
+    >>> graph = [[0, 1, 0, 1, 0],
+    ...          [1, 0, 1, 1, 1],
+    ...          [0, 1, 0, 0, 1],
+    ...          [1, 1, 0, 0, 0],
+    ...          [0, 1, 1, 0, 0]]
+    >>> hamilton_cycle(graph,4)
+    []
+    """
     # Initialize path with -1, indicating that we have not visited them yet
     path = [-1] * (len(graph) + 1)
     # initialize start and end of path with starting index
     path[0] = path[-1] = start_index
     # evaluate and if we find answer return path either return empty array
-    return path if util_hamilton_cycle(graph, path, 1) else []
+    return path if util_hamilton_cycle(graph, path, 1) else []
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/hamiltonian_cycle.py
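A returned cycle is easy to verify independently of the solver: it must have length n + 1, start and end at the same vertex, visit every vertex exactly once in between, and use only existing edges. A small checker sketch (my own helper, not part of the repo file), exercised on the 5-vertex graph from the doctests above:

```python
def is_hamiltonian_cycle(graph, path):
    n = len(graph)
    # A cycle must visit every vertex once and return to the start
    if len(path) != n + 1 or path[0] != path[-1]:
        return False
    if sorted(path[:-1]) != list(range(n)):
        return False
    # Every consecutive pair must be connected by an edge
    return all(graph[u][v] == 1 for u, v in zip(path, path[1:]))


graph = [
    [0, 1, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 1, 1, 1, 0],
]
assert is_hamiltonian_cycle(graph, [0, 1, 2, 4, 3, 0])
assert not is_hamiltonian_cycle(graph, [0, 2, 1, 4, 3, 0])  # 0-2 is not an edge
```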
Create documentation for each function signature
from __future__ import annotations

import math


def minimax(
    depth: int, node_index: int, is_max: bool, scores: list[int], height: float
) -> int:
    if depth < 0:
        raise ValueError("Depth cannot be less than 0")
    if len(scores) == 0:
        raise ValueError("Scores cannot be empty")

    # Base case: If the current depth equals the height of the tree,
    # return the score of the current node.
    if depth == height:
        return scores[node_index]

    # If it's the maximizer's turn, choose the maximum score
    # between the two possible moves.
    if is_max:
        return max(
            minimax(depth + 1, node_index * 2, False, scores, height),
            minimax(depth + 1, node_index * 2 + 1, False, scores, height),
        )

    # If it's the minimizer's turn, choose the minimum score
    # between the two possible moves.
    return min(
        minimax(depth + 1, node_index * 2, True, scores, height),
        minimax(depth + 1, node_index * 2 + 1, True, scores, height),
    )


def main() -> None:
    # Sample scores and height calculation
    scores = [90, 23, 6, 33, 21, 65, 123, 34423]
    height = math.log(len(scores), 2)

    # Calculate and print the optimal value using the minimax algorithm
    print("Optimal value : ", end="")
    print(minimax(0, 0, True, scores, height))


if __name__ == "__main__":
    import doctest

    doctest.testmod()
    main()
---
+++
@@ -1,3 +1,12 @@
+"""
+Minimax helps to achieve maximum score in a game by checking all possible moves
+depth is current depth in game tree.
+
+nodeIndex is index of current node in scores[].
+if move is of maximizer return true else false
+leaves of game tree is stored in scores[]
+height is maximum height of Game tree
+"""
 from __future__ import annotations
 
 
@@ -7,6 +16,41 @@
 def minimax(
     depth: int, node_index: int, is_max: bool, scores: list[int], height: float
 ) -> int:
+    """
+    This function implements the minimax algorithm, which helps achieve the optimal
+    score for a player in a two-player game by checking all possible moves.
+    If the player is the maximizer, then the score is maximized.
+    If the player is the minimizer, then the score is minimized.
+
+    Parameters:
+    - depth: Current depth in the game tree.
+    - node_index: Index of the current node in the scores list.
+    - is_max: A boolean indicating whether the current move
+      is for the maximizer (True) or minimizer (False).
+    - scores: A list containing the scores of the leaves of the game tree.
+    - height: The maximum height of the game tree.
+
+    Returns:
+    - An integer representing the optimal score for the current player.
+
+    >>> import math
+    >>> scores = [90, 23, 6, 33, 21, 65, 123, 34423]
+    >>> height = math.log(len(scores), 2)
+    >>> minimax(0, 0, True, scores, height)
+    65
+    >>> minimax(-1, 0, True, scores, height)
+    Traceback (most recent call last):
+        ...
+    ValueError: Depth cannot be less than 0
+    >>> minimax(0, 0, True, [], 2)
+    Traceback (most recent call last):
+        ...
+    ValueError: Scores cannot be empty
+    >>> scores = [3, 5, 2, 9, 12, 5, 23, 23]
+    >>> height = math.log(len(scores), 2)
+    >>> minimax(0, 0, True, scores, height)
+    12
+    """
     if depth < 0:
         raise ValueError("Depth cannot be less than 0")
@@ -48,4 +92,4 @@
     import doctest
 
     doctest.testmod()
-    main()
+    main()
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/minimax.py
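The row's implementation walks an implicit complete binary tree: node `i` at one level has children `2i` and `2i + 1` at the next, and the leaves are the `scores` list. A stripped-down sketch of the same recursion (validation and printing removed, names are mine) makes the alternation between max and min easy to trace on a 4-leaf tree:

```python
from math import log2


def minimax(depth, node_index, is_max, scores, height):
    # Leaves hold the actual scores
    if depth == height:
        return scores[node_index]
    children = (
        minimax(depth + 1, node_index * 2, not is_max, scores, height),
        minimax(depth + 1, node_index * 2 + 1, not is_max, scores, height),
    )
    # Maximizer picks the larger child, minimizer the smaller
    return max(children) if is_max else min(children)


scores = [3, 5, 2, 9]
# Root maximizes over min(3, 5) = 3 and min(2, 9) = 2
assert minimax(0, 0, True, scores, log2(len(scores))) == 3
```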
Document my Python code with docstrings
from __future__ import annotations


def depth_first_search(
    possible_board: list[int],
    diagonal_right_collisions: list[int],
    diagonal_left_collisions: list[int],
    boards: list[list[str]],
    n: int,
) -> None:
    # Get next row in the current board (possible_board) to fill it with a queen
    row = len(possible_board)

    # If row is equal to the size of the board it means there are a queen in each row in
    # the current board (possible_board)
    if row == n:
        # We convert the variable possible_board that looks like this: [1, 3, 0, 2] to
        # this: ['. Q . . ', '. . . Q ', 'Q . . . ', '. . Q . ']
        boards.append([". " * i + "Q " + ". " * (n - 1 - i) for i in possible_board])
        return

    # We iterate each column in the row to find all possible results in each row
    for col in range(n):
        # We apply that we learned previously. First we check that in the current board
        # (possible_board) there are not other same value because if there is it means
        # that there are a collision in vertical. Then we apply the two formulas we
        # learned before:
        #
        # 45º: y - x = b or 45: row - col = b
        # 135º: y + x = b or row + col = b.
        #
        # And we verify if the results of this two formulas not exist in their variables
        # respectively. (diagonal_right_collisions, diagonal_left_collisions)
        #
        # If any or these are True it means there is a collision so we continue to the
        # next value in the for loop.
        if (
            col in possible_board
            or row - col in diagonal_right_collisions
            or row + col in diagonal_left_collisions
        ):
            continue

        # If it is False we call dfs function again and we update the inputs
        depth_first_search(
            [*possible_board, col],
            [*diagonal_right_collisions, row - col],
            [*diagonal_left_collisions, row + col],
            boards,
            n,
        )


def n_queens_solution(n: int) -> None:
    boards: list[list[str]] = []
    depth_first_search([], [], [], boards, n)

    # Print all the boards
    for board in boards:
        for column in board:
            print(column)
        print("")

    print(len(boards), "solutions were found.")


if __name__ == "__main__":
    import doctest

    doctest.testmod()
    n_queens_solution(4)
---
+++
@@ -1,3 +1,80 @@
+r"""
+Problem:
+
+The n queens problem is: placing N queens on a N * N chess board such that no queen
+can attack any other queens placed on that chess board. This means that one queen
+cannot have any other queen on its horizontal, vertical and diagonal lines.
+
+Solution:
+
+To solve this problem we will use simple math. First we know the queen can move in all
+the possible ways, we can simplify it in this: vertical, horizontal, diagonal left and
+diagonal right.
+
+We can visualize it like this:
+
+left diagonal = \
+right diagonal = /
+
+On a chessboard vertical movement could be the rows and horizontal movement could be
+the columns.
+
+In programming we can use an array, and in this array each index could be the rows and
+each value in the array could be the column. For example:
+
+    . Q . .     We have this chessboard with one queen in each column and each queen
+    . . . Q     can't attack to each other.
+    Q . . .     The array for this example would look like this: [1, 3, 0, 2]
+    . . Q .
+
+So if we use an array and we verify that each value in the array is different to each
+other we know that at least the queens can't attack each other in horizontal and
+vertical.
+
+At this point we have it halfway completed and we will treat the chessboard as a
+Cartesian plane. Hereinafter we are going to remember basic math, so in the school we
+learned this formula:
+
+    Slope of a line:
+
+           y2 - y1
+     m = ----------
+           x2 - x1
+
+This formula allow us to get the slope. For the angles 45º (right diagonal) and 135º
+(left diagonal) this formula gives us m = 1, and m = -1 respectively.
+
+See::
+https://www.enotes.com/homework-help/write-equation-line-that-hits-origin-45-degree-1474860
+
+Then we have this other formula:
+
+Slope intercept:
+
+y = mx + b
+
+b is where the line crosses the Y axis (to get more information see:
+https://www.mathsisfun.com/y_intercept.html), if we change the formula to solve for b
+we would have:
+
+y - mx = b
+
+And since we already have the m values for the angles 45º and 135º, this formula would
+look like this:
+
+45º: y - (1)x = b
+45º: y - x = b
+
+135º: y - (-1)x = b
+135º: y + x = b
+
+y = row
+x = column
+
+Applying these two formulas we can check if a queen in some position is being attacked
+for another one or vice versa.
+
+"""
 from __future__ import annotations
 
 
@@ -9,6 +86,14 @@
     boards: list[list[str]],
     n: int,
 ) -> None:
+    """
+    >>> boards = []
+    >>> depth_first_search([], [], [], boards, 4)
+    >>> for board in boards:
+    ...     print(board)
+    ['. Q . . ', '. . . Q ', 'Q . . . ', '. . Q . ']
+    ['. . Q . ', 'Q . . . ', '. . . Q ', '. Q . . ']
+    """
     # Get next row in the current board (possible_board) to fill it with a queen
     row = len(possible_board)
 
@@ -70,4 +155,4 @@
     import doctest
 
     doctest.testmod()
-    n_queens_solution(4)
+    n_queens_solution(4)
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/n_queens_math.py
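The two diagonal formulas the row explains (45º: row - col = b, 135º: row + col = b) reduce the attack check to set-membership: a placement is safe exactly when all columns, all row - col values, and all row + col values are distinct. A small checker sketch (the function name is mine) that validates the example board [1, 3, 0, 2] from the explanation:

```python
def no_attacks(board):
    # board[row] = column of the queen in that row
    n = len(board)
    columns = set(board)
    right_diagonals = {row - col for row, col in enumerate(board)}  # 45 degrees
    left_diagonals = {row + col for row, col in enumerate(board)}  # 135 degrees
    # All three sets must contain one distinct value per queen
    return len(columns) == n and len(right_diagonals) == n and len(left_diagonals) == n


assert no_attacks([1, 3, 0, 2])  # the worked example from the docstring
assert not no_attacks([0, 1, 2, 3])  # all queens share the 45-degree diagonal
```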
Turn comments into proper docstrings
from __future__ import annotations


def solve_maze(
    maze: list[list[int]],
    source_row: int,
    source_column: int,
    destination_row: int,
    destination_column: int,
) -> list[list[int]]:
    size = len(maze)
    # Check if source and destination coordinates are Invalid.
    if not (0 <= source_row <= size - 1 and 0 <= source_column <= size - 1) or (
        not (0 <= destination_row <= size - 1 and 0 <= destination_column <= size - 1)
    ):
        raise ValueError("Invalid source or destination coordinates")
    # We need to create solution object to save path.
    solutions = [[1 for _ in range(size)] for _ in range(size)]
    solved = run_maze(
        maze, source_row, source_column, destination_row, destination_column, solutions
    )
    if solved:
        return solutions
    else:
        raise ValueError("No solution exists!")


def run_maze(
    maze: list[list[int]],
    i: int,
    j: int,
    destination_row: int,
    destination_column: int,
    solutions: list[list[int]],
) -> bool:
    size = len(maze)
    # Final check point.
    if i == destination_row and j == destination_column and maze[i][j] == 0:
        solutions[i][j] = 0
        return True

    lower_flag = (not i < 0) and (not j < 0)  # Check lower bounds
    upper_flag = (i < size) and (j < size)  # Check upper bounds

    if lower_flag and upper_flag:
        # check for already visited and block points.
        block_flag = (solutions[i][j]) and (not maze[i][j])
        if block_flag:
            # check visited
            solutions[i][j] = 0

            # check for directions
            if (
                run_maze(maze, i + 1, j, destination_row, destination_column, solutions)
                or run_maze(
                    maze, i, j + 1, destination_row, destination_column, solutions
                )
                or run_maze(
                    maze, i - 1, j, destination_row, destination_column, solutions
                )
                or run_maze(
                    maze, i, j - 1, destination_row, destination_column, solutions
                )
            ):
                return True

            solutions[i][j] = 1
            return False
    return False


if __name__ == "__main__":
    import doctest

    doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)
---
+++
@@ -8,6 +8,117 @@
     destination_row: int,
     destination_column: int,
 ) -> list[list[int]]:
+    """
+    This method solves the "rat in maze" problem.
+    Parameters :
+        - maze: A two dimensional matrix of zeros and ones.
+        - source_row: The row index of the starting point.
+        - source_column: The column index of the starting point.
+        - destination_row: The row index of the destination point.
+        - destination_column: The column index of the destination point.
+    Returns:
+        - solution: A 2D matrix representing the solution path if it exists.
+    Raises:
+        - ValueError: If no solution exists or if the source or
+          destination coordinates are invalid.
+    Description:
+        This method navigates through a maze represented as an n by n matrix,
+        starting from a specified source cell and
+        aiming to reach a destination cell.
+        The maze consists of walls (1s) and open paths (0s).
+        By providing custom row and column values, the source and destination
+        cells can be adjusted.
+    >>> maze = [[0, 1, 0, 1, 1],
+    ...         [0, 0, 0, 0, 0],
+    ...         [1, 0, 1, 0, 1],
+    ...         [0, 0, 1, 0, 0],
+    ...         [1, 0, 0, 1, 0]]
+    >>> solve_maze(maze,0,0,len(maze)-1,len(maze)-1)  # doctest: +NORMALIZE_WHITESPACE
+    [[0, 1, 1, 1, 1],
+     [0, 0, 0, 0, 1],
+     [1, 1, 1, 0, 1],
+     [1, 1, 1, 0, 0],
+     [1, 1, 1, 1, 0]]
+
+    Note:
+        In the output maze, the zeros (0s) represent one of the possible
+        paths from the source to the destination.
+
+    >>> maze = [[0, 1, 0, 1, 1],
+    ...         [0, 0, 0, 0, 0],
+    ...         [0, 0, 0, 0, 1],
+    ...         [0, 0, 0, 0, 0],
+    ...         [0, 0, 0, 0, 0]]
+    >>> solve_maze(maze,0,0,len(maze)-1,len(maze)-1)  # doctest: +NORMALIZE_WHITESPACE
+    [[0, 1, 1, 1, 1],
+     [0, 1, 1, 1, 1],
+     [0, 1, 1, 1, 1],
+     [0, 1, 1, 1, 1],
+     [0, 0, 0, 0, 0]]
+
+    >>> maze = [[0, 0, 0],
+    ...         [0, 1, 0],
+    ...         [1, 0, 0]]
+    >>> solve_maze(maze,0,0,len(maze)-1,len(maze)-1)  # doctest: +NORMALIZE_WHITESPACE
+    [[0, 0, 0],
+     [1, 1, 0],
+     [1, 1, 0]]
+
+    >>> maze = [[1, 0, 0],
+    ...         [0, 1, 0],
+    ...         [1, 0, 0]]
+    >>> solve_maze(maze,0,1,len(maze)-1,len(maze)-1)  # doctest: +NORMALIZE_WHITESPACE
+    [[1, 0, 0],
+     [1, 1, 0],
+     [1, 1, 0]]
+
+    >>> maze = [[1, 1, 0, 0, 1, 0, 0, 1],
+    ...         [1, 0, 1, 0, 0, 1, 1, 1],
+    ...         [0, 1, 0, 1, 0, 0, 1, 0],
+    ...         [1, 1, 1, 0, 0, 1, 0, 1],
+    ...         [0, 1, 0, 0, 1, 0, 1, 1],
+    ...         [0, 0, 0, 1, 1, 1, 0, 1],
+    ...         [0, 1, 0, 1, 0, 1, 1, 1],
+    ...         [1, 1, 0, 0, 0, 0, 0, 1]]
+    >>> solve_maze(maze,0,2,len(maze)-1,2)  # doctest: +NORMALIZE_WHITESPACE
+    [[1, 1, 0, 0, 1, 1, 1, 1],
+     [1, 1, 1, 0, 0, 1, 1, 1],
+     [1, 1, 1, 1, 0, 1, 1, 1],
+     [1, 1, 1, 0, 0, 1, 1, 1],
+     [1, 1, 0, 0, 1, 1, 1, 1],
+     [1, 1, 0, 1, 1, 1, 1, 1],
+     [1, 1, 0, 1, 1, 1, 1, 1],
+     [1, 1, 0, 1, 1, 1, 1, 1]]
+    >>> maze = [[1, 0, 0],
+    ...         [0, 1, 1],
+    ...         [1, 0, 1]]
+    >>> solve_maze(maze,0,1,len(maze)-1,len(maze)-1)
+    Traceback (most recent call last):
+        ...
+    ValueError: No solution exists!
+
+    >>> maze = [[0, 0],
+    ...         [1, 1]]
+    >>> solve_maze(maze,0,0,len(maze)-1,len(maze)-1)
+    Traceback (most recent call last):
+        ...
+    ValueError: No solution exists!
+
+    >>> maze = [[0, 1],
+    ...         [1, 0]]
+    >>> solve_maze(maze,2,0,len(maze)-1,len(maze)-1)
+    Traceback (most recent call last):
+        ...
+    ValueError: Invalid source or destination coordinates
+
+    >>> maze = [[1, 0, 0],
+    ...         [0, 1, 0],
+    ...         [1, 0, 0]]
+    >>> solve_maze(maze,0,1,len(maze),len(maze)-1)
+    Traceback (most recent call last):
+        ...
+    ValueError: Invalid source or destination coordinates
+    """
     size = len(maze)
     # Check if source and destination coordinates are Invalid.
     if not (0 <= source_row <= size - 1 and 0 <= source_column <= size - 1) or (
@@ -33,6 +144,17 @@
     destination_column: int,
     solutions: list[list[int]],
 ) -> bool:
+    """
+    This method is recursive starting from (i, j) and going in one of four directions:
+    up, down, left, right.
+    If a path is found to destination it returns True otherwise it returns False.
+    Parameters
+        maze: A two dimensional matrix of zeros and ones.
+        i, j : coordinates of matrix
+        solutions: A two dimensional matrix of solutions.
+    Returns:
+        Boolean if path is found True, Otherwise False.
+    """
     size = len(maze)
     # Final check point.
     if i == destination_row and j == destination_column and maze[i][j] == 0:
@@ -72,4 +194,4 @@
 if __name__ == "__main__":
     import doctest
 
-    doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)
+    doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/rat_in_maze.py
Please document this code using docstrings
def get_point_key(len_board: int, len_board_column: int, row: int, column: int) -> int: return len_board * len_board_column * row + column def exits_word( board: list[list[str]], word: str, row: int, column: int, word_index: int, visited_points_set: set[int], ) -> bool: if board[row][column] != word[word_index]: return False if word_index == len(word) - 1: return True traverts_directions = [(0, 1), (0, -1), (-1, 0), (1, 0)] len_board = len(board) len_board_column = len(board[0]) for direction in traverts_directions: next_i = row + direction[0] next_j = column + direction[1] if not (0 <= next_i < len_board and 0 <= next_j < len_board_column): continue key = get_point_key(len_board, len_board_column, next_i, next_j) if key in visited_points_set: continue visited_points_set.add(key) if exits_word(board, word, next_i, next_j, word_index + 1, visited_points_set): return True visited_points_set.remove(key) return False def word_exists(board: list[list[str]], word: str) -> bool: # Validate board board_error_message = ( "The board should be a non empty matrix of single chars strings." ) len_board = len(board) if not isinstance(board, list) or len(board) == 0: raise ValueError(board_error_message) for row in board: if not isinstance(row, list) or len(row) == 0: raise ValueError(board_error_message) for item in row: if not isinstance(item, str) or len(item) != 1: raise ValueError(board_error_message) # Validate word if not isinstance(word, str) or len(word) == 0: raise ValueError( "The word parameter should be a string of length greater than 0." ) len_board_column = len(board[0]) for i in range(len_board): for j in range(len_board_column): if exits_word( board, word, i, j, 0, {get_point_key(len_board, len_board_column, i, j)} ): return True return False if __name__ == "__main__": import doctest doctest.testmod()
--- +++ @@ -1,6 +1,45 @@+""" +Author : Alexander Pantyukhin +Date : November 24, 2022 + +Task: +Given an m x n grid of characters board and a string word, +return true if word exists in the grid. + +The word can be constructed from letters of sequentially adjacent cells, +where adjacent cells are horizontally or vertically neighboring. +The same letter cell may not be used more than once. + +Example: + +Matrix: +--------- +|A|B|C|E| +|S|F|C|S| +|A|D|E|E| +--------- + +Word: +"ABCCED" + +Result: +True + +Implementation notes: Use backtracking approach. +At each point, check all neighbors to try to find the next letter of the word. + +leetcode: https://leetcode.com/problems/word-search/ + +""" def get_point_key(len_board: int, len_board_column: int, row: int, column: int) -> int: + """ + Returns the hash key of matrix indexes. + + >>> get_point_key(10, 20, 1, 0) + 200 + """ return len_board * len_board_column * row + column @@ -13,6 +52,13 @@ word_index: int, visited_points_set: set[int], ) -> bool: + """ + Return True if it's possible to search the word suffix + starting from the word_index. + + >>> exits_word([["A"]], "B", 0, 0, 0, set()) + False + """ if board[row][column] != word[word_index]: return False @@ -43,6 +89,38 @@ def word_exists(board: list[list[str]], word: str) -> bool: + """ + >>> word_exists([["A","B","C","E"],["S","F","C","S"],["A","D","E","E"]], "ABCCED") + True + >>> word_exists([["A","B","C","E"],["S","F","C","S"],["A","D","E","E"]], "SEE") + True + >>> word_exists([["A","B","C","E"],["S","F","C","S"],["A","D","E","E"]], "ABCB") + False + >>> word_exists([["A"]], "A") + True + >>> word_exists([["B", "A", "A"], ["A", "A", "A"], ["A", "B", "A"]], "ABB") + False + >>> word_exists([["A"]], 123) + Traceback (most recent call last): + ... + ValueError: The word parameter should be a string of length greater than 0. + >>> word_exists([["A"]], "") + Traceback (most recent call last): + ... 
+ ValueError: The word parameter should be a string of length greater than 0. + >>> word_exists([[]], "AB") + Traceback (most recent call last): + ... + ValueError: The board should be a non empty matrix of single chars strings. + >>> word_exists([], "AB") + Traceback (most recent call last): + ... + ValueError: The board should be a non empty matrix of single chars strings. + >>> word_exists([["A"], [21]], "AB") + Traceback (most recent call last): + ... + ValueError: The board should be a non empty matrix of single chars strings. + """ # Validate board board_error_message = ( @@ -81,4 +159,4 @@ if __name__ == "__main__": import doctest - doctest.testmod()+ doctest.testmod()
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/word_search.py
Add docstrings to my Python code
# Knight Tour Intro: https://www.youtube.com/watch?v=ab_dY3dZFHM from __future__ import annotations def get_valid_pos(position: tuple[int, int], n: int) -> list[tuple[int, int]]: y, x = position positions = [ (y + 1, x + 2), (y - 1, x + 2), (y + 1, x - 2), (y - 1, x - 2), (y + 2, x + 1), (y + 2, x - 1), (y - 2, x + 1), (y - 2, x - 1), ] permissible_positions = [] for inner_position in positions: y_test, x_test = inner_position if 0 <= y_test < n and 0 <= x_test < n: permissible_positions.append(inner_position) return permissible_positions def is_complete(board: list[list[int]]) -> bool: return not any(elem == 0 for row in board for elem in row) def open_knight_tour_helper( board: list[list[int]], pos: tuple[int, int], curr: int ) -> bool: if is_complete(board): return True for position in get_valid_pos(pos, len(board)): y, x = position if board[y][x] == 0: board[y][x] = curr + 1 if open_knight_tour_helper(board, position, curr + 1): return True board[y][x] = 0 return False def open_knight_tour(n: int) -> list[list[int]]: board = [[0 for i in range(n)] for j in range(n)] for i in range(n): for j in range(n): board[i][j] = 1 if open_knight_tour_helper(board, (i, j), 1): return board board[i][j] = 0 msg = f"Open Knight Tour cannot be performed on a board of size {n}" raise ValueError(msg) if __name__ == "__main__": import doctest doctest.testmod()
--- +++ @@ -4,6 +4,12 @@ def get_valid_pos(position: tuple[int, int], n: int) -> list[tuple[int, int]]: + """ + Find all the valid positions a knight can move to from the current position. + + >>> get_valid_pos((1, 3), 4) + [(2, 1), (0, 1), (3, 2)] + """ y, x = position positions = [ @@ -27,6 +33,15 @@ def is_complete(board: list[list[int]]) -> bool: + """ + Check if the board (matrix) has been completely filled with non-zero values. + + >>> is_complete([[1]]) + True + + >>> is_complete([[1, 2], [3, 0]]) + False + """ return not any(elem == 0 for row in board for elem in row) @@ -34,6 +49,9 @@ def open_knight_tour_helper( board: list[list[int]], pos: tuple[int, int], curr: int ) -> bool: + """ + Helper function to solve knight tour problem. + """ if is_complete(board): return True @@ -51,6 +69,18 @@ def open_knight_tour(n: int) -> list[list[int]]: + """ + Find the solution for the knight tour problem for a board of size n. Raises + ValueError if the tour cannot be performed for the given size. + + >>> open_knight_tour(1) + [[1]] + + >>> open_knight_tour(2) + Traceback (most recent call last): + ... + ValueError: Open Knight Tour cannot be performed on a board of size 2 + """ board = [[0 for i in range(n)] for j in range(n)] @@ -68,4 +98,4 @@ if __name__ == "__main__": import doctest - doctest.testmod()+ doctest.testmod()
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/knight_tour.py
Generate helpful docstrings for debugging
def backtrack(input_string: str, word_dict: set[str], start: int) -> bool: # Base case: if the starting index has reached the end of the string if start == len(input_string): return True # Try every possible substring from 'start' to 'end' for end in range(start + 1, len(input_string) + 1): if input_string[start:end] in word_dict and backtrack( input_string, word_dict, end ): return True return False def word_break(input_string: str, word_dict: set[str]) -> bool: return backtrack(input_string, word_dict, 0)
--- +++ @@ -1,6 +1,35 @@+""" +Word Break Problem is a well-known problem in computer science. +Given a string and a dictionary of words, the task is to determine if +the string can be segmented into a sequence of one or more dictionary words. + +Wikipedia: https://en.wikipedia.org/wiki/Word_break_problem +""" def backtrack(input_string: str, word_dict: set[str], start: int) -> bool: + """ + Helper function that uses backtracking to determine if a valid + word segmentation is possible starting from index 'start'. + + Parameters: + input_string (str): The input string to be segmented. + word_dict (set[str]): A set of valid dictionary words. + start (int): The starting index of the substring to be checked. + + Returns: + bool: True if a valid segmentation is possible, otherwise False. + + Example: + >>> backtrack("leetcode", {"leet", "code"}, 0) + True + + >>> backtrack("applepenapple", {"apple", "pen"}, 0) + True + + >>> backtrack("catsandog", {"cats", "dog", "sand", "and", "cat"}, 0) + False + """ # Base case: if the starting index has reached the end of the string if start == len(input_string): @@ -17,5 +46,29 @@ def word_break(input_string: str, word_dict: set[str]) -> bool: + """ + Determines if the input string can be segmented into a sequence of + valid dictionary words using backtracking. - return backtrack(input_string, word_dict, 0)+ Parameters: + input_string (str): The input string to segment. + word_dict (set[str]): The set of valid words. + + Returns: + bool: True if the string can be segmented into valid words, otherwise False. + + Example: + >>> word_break("leetcode", {"leet", "code"}) + True + + >>> word_break("applepenapple", {"apple", "pen"}) + True + + >>> word_break("catsandog", {"cats", "dog", "sand", "and", "cat"}) + False + + >>> word_break("applepenapple", {}) + False + """ + + return backtrack(input_string, word_dict, 0)
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/word_break.py
Write docstrings including parameters and return values
from __future__ import annotations Matrix = list[list[int]] # assigning initial values to the grid initial_grid: Matrix = [ [3, 0, 6, 5, 0, 8, 4, 0, 0], [5, 2, 0, 0, 0, 0, 0, 0, 0], [0, 8, 7, 0, 0, 0, 0, 3, 1], [0, 0, 3, 0, 1, 0, 0, 8, 0], [9, 0, 0, 8, 6, 3, 0, 0, 5], [0, 5, 0, 0, 9, 0, 6, 0, 0], [1, 3, 0, 0, 0, 0, 2, 5, 0], [0, 0, 0, 0, 0, 0, 0, 7, 4], [0, 0, 5, 2, 0, 6, 3, 0, 0], ] # a grid with no solution no_solution: Matrix = [ [5, 0, 6, 5, 0, 8, 4, 0, 3], [5, 2, 0, 0, 0, 0, 0, 0, 2], [1, 8, 7, 0, 0, 0, 0, 3, 1], [0, 0, 3, 0, 1, 0, 0, 8, 0], [9, 0, 0, 8, 6, 3, 0, 0, 5], [0, 5, 0, 0, 9, 0, 6, 0, 0], [1, 3, 0, 0, 0, 0, 2, 5, 0], [0, 0, 0, 0, 0, 0, 0, 7, 4], [0, 0, 5, 2, 0, 6, 3, 0, 0], ] def is_safe(grid: Matrix, row: int, column: int, n: int) -> bool: for i in range(9): if n in {grid[row][i], grid[i][column]}: return False for i in range(3): for j in range(3): if grid[(row - row % 3) + i][(column - column % 3) + j] == n: return False return True def find_empty_location(grid: Matrix) -> tuple[int, int] | None: for i in range(9): for j in range(9): if grid[i][j] == 0: return i, j return None def sudoku(grid: Matrix) -> Matrix | None: if location := find_empty_location(grid): row, column = location else: # If the location is ``None``, then the grid is solved. return grid for digit in range(1, 10): if is_safe(grid, row, column, digit): grid[row][column] = digit if sudoku(grid) is not None: return grid grid[row][column] = 0 return None def print_solution(grid: Matrix) -> None: for row in grid: for cell in row: print(cell, end=" ") print() if __name__ == "__main__": # make a copy of grid so that you can compare with the unmodified grid for example_grid in (initial_grid, no_solution): print("\nExample grid:\n" + "=" * 20) print_solution(example_grid) print("\nExample grid solution:") solution = sudoku(example_grid) if solution is not None: print_solution(solution) else: print("Cannot find a solution.")
--- +++ @@ -1,3 +1,14 @@+""" +Given a partially filled 9x9 2D array, the objective is to fill a 9x9 +square grid with digits numbered 1 to 9, so that every row, column, and +and each of the nine 3x3 sub-grids contains all of the digits. + +This can be solved using Backtracking and is similar to n-queens. +We check to see if a cell is safe or not and recursively call the +function on the next column to see if it returns True. if yes, we +have solved the puzzle. else, we backtrack and place another number +in that cell and repeat this process. +""" from __future__ import annotations @@ -31,6 +42,12 @@ def is_safe(grid: Matrix, row: int, column: int, n: int) -> bool: + """ + This function checks the grid to see if each row, + column, and the 3x3 subgrids contain the digit 'n'. + It returns False if it is not 'safe' (a duplicate digit + is found) else returns True if it is 'safe' + """ for i in range(9): if n in {grid[row][i], grid[i][column]}: return False @@ -44,6 +61,10 @@ def find_empty_location(grid: Matrix) -> tuple[int, int] | None: + """ + This function finds an empty location so that we can assign a number + for that particular row and column. 
+ """ for i in range(9): for j in range(9): if grid[i][j] == 0: @@ -52,6 +73,24 @@ def sudoku(grid: Matrix) -> Matrix | None: + """ + Takes a partially filled-in grid and attempts to assign values to + all unassigned locations in such a way to meet the requirements + for Sudoku solution (non-duplication across rows, columns, and boxes) + + >>> sudoku(initial_grid) # doctest: +NORMALIZE_WHITESPACE + [[3, 1, 6, 5, 7, 8, 4, 9, 2], + [5, 2, 9, 1, 3, 4, 7, 6, 8], + [4, 8, 7, 6, 2, 9, 5, 3, 1], + [2, 6, 3, 4, 1, 5, 9, 8, 7], + [9, 7, 4, 8, 6, 3, 1, 2, 5], + [8, 5, 1, 7, 9, 2, 6, 4, 3], + [1, 3, 8, 9, 4, 7, 2, 5, 6], + [6, 9, 2, 3, 5, 1, 8, 7, 4], + [7, 4, 5, 2, 8, 6, 3, 1, 9]] + >>> sudoku(no_solution) is None + True + """ if location := find_empty_location(grid): row, column = location else: @@ -71,6 +110,10 @@ def print_solution(grid: Matrix) -> None: + """ + A function to print the solution in the form + of a 9x9 grid + """ for row in grid: for cell in row: print(cell, end=" ") @@ -87,4 +130,4 @@ if solution is not None: print_solution(solution) else: - print("Cannot find a solution.")+ print("Cannot find a solution.")
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/sudoku.py
Add docstrings to make code maintainable
def valid_coloring( neighbours: list[int], colored_vertices: list[int], color: int ) -> bool: # Does any neighbour not satisfy the constraints return not any( neighbour == 1 and colored_vertices[i] == color for i, neighbour in enumerate(neighbours) ) def util_color( graph: list[list[int]], max_colors: int, colored_vertices: list[int], index: int ) -> bool: # Base Case if index == len(graph): return True # Recursive Step for i in range(max_colors): if valid_coloring(graph[index], colored_vertices, i): # Color current vertex colored_vertices[index] = i # Validate coloring if util_color(graph, max_colors, colored_vertices, index + 1): return True # Backtrack colored_vertices[index] = -1 return False def color(graph: list[list[int]], max_colors: int) -> list[int]: colored_vertices = [-1] * len(graph) if util_color(graph, max_colors, colored_vertices, 0): return colored_vertices return []
--- +++ @@ -1,8 +1,31 @@+""" +Graph Coloring also called "m coloring problem" +consists of coloring a given graph with at most m colors +such that no adjacent vertices are assigned the same color + +Wikipedia: https://en.wikipedia.org/wiki/Graph_coloring +""" def valid_coloring( neighbours: list[int], colored_vertices: list[int], color: int ) -> bool: + """ + For each neighbour check if the coloring constraint is satisfied + If any of the neighbours fail the constraint return False + If all neighbours validate the constraint return True + + >>> neighbours = [0,1,0,1,0] + >>> colored_vertices = [0, 2, 1, 2, 0] + + >>> color = 1 + >>> valid_coloring(neighbours, colored_vertices, color) + True + + >>> color = 2 + >>> valid_coloring(neighbours, colored_vertices, color) + False + """ # Does any neighbour not satisfy the constraints return not any( neighbour == 1 and colored_vertices[i] == color @@ -13,6 +36,37 @@ def util_color( graph: list[list[int]], max_colors: int, colored_vertices: list[int], index: int ) -> bool: + """ + Pseudo-Code + + Base Case: + 1. Check if coloring is complete + 1.1 If complete return True (meaning that we successfully colored the graph) + + Recursive Step: + 2. Iterates over each color: + Check if the current coloring is valid: + 2.1. Color given vertex + 2.2. Do recursive call, check if this coloring leads to a solution + 2.4. if current coloring leads to a solution return + 2.5. Uncolor given vertex + + >>> graph = [[0, 1, 0, 0, 0], + ... [1, 0, 1, 0, 1], + ... [0, 1, 0, 1, 0], + ... [0, 1, 1, 0, 0], + ... 
[0, 1, 0, 0, 0]] + >>> max_colors = 3 + >>> colored_vertices = [0, 1, 0, 0, 0] + >>> index = 3 + + >>> util_color(graph, max_colors, colored_vertices, index) + True + + >>> max_colors = 2 + >>> util_color(graph, max_colors, colored_vertices, index) + False + """ # Base Case if index == len(graph): @@ -32,9 +86,36 @@ def color(graph: list[list[int]], max_colors: int) -> list[int]: + """ + Wrapper function to call subroutine called util_color + which will either return True or False. + If True is returned colored_vertices list is filled with correct colorings + + >>> graph = [[0, 1, 0, 0, 0], + ... [1, 0, 1, 0, 1], + ... [0, 1, 0, 1, 0], + ... [0, 1, 1, 0, 0], + ... [0, 1, 0, 0, 0]] + + >>> max_colors = 3 + >>> color(graph, max_colors) + [0, 1, 0, 2, 0] + + >>> max_colors = 2 + >>> color(graph, max_colors) + [] + >>> color([], 2) # empty graph + [] + >>> color([[0]], 1) # single node, 1 color + [0] + >>> color([[0, 1], [1, 0]], 1) # 2 nodes, 1 color (impossible) + [] + >>> color([[0, 1], [1, 0]], 2) # 2 nodes, 2 colors (possible) + [0, 1] + """ colored_vertices = [-1] * len(graph) if util_color(graph, max_colors, colored_vertices, 0): return colored_vertices - return []+ return []
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/coloring.py
Generate documentation strings for clarity
def backtrack( candidates: list, path: list, answer: list, target: int, previous_index: int ) -> None: if target == 0: answer.append(path.copy()) else: for index in range(previous_index, len(candidates)): if target >= candidates[index]: path.append(candidates[index]) backtrack(candidates, path, answer, target - candidates[index], index) path.pop(len(path) - 1) def combination_sum(candidates: list, target: int) -> list: if not candidates: raise ValueError("Candidates list should not be empty") if any(x < 0 for x in candidates): raise ValueError("All elements in candidates must be non-negative") path = [] # type: list[int] answer = [] # type: list[int] backtrack(candidates, path, answer, target, 0) return answer def main() -> None: print(combination_sum([-8, 2.3, 0], 1)) if __name__ == "__main__": import doctest doctest.testmod() main()
--- +++ @@ -1,8 +1,33 @@+""" +In the Combination Sum problem, we are given a list consisting of distinct integers. +We need to find all the combinations whose sum equals to target given. +We can use an element more than one. + +Time complexity(Average Case): O(n!) + +Constraints: +1 <= candidates.length <= 30 +2 <= candidates[i] <= 40 +All elements of candidates are distinct. +1 <= target <= 40 +""" def backtrack( candidates: list, path: list, answer: list, target: int, previous_index: int ) -> None: + """ + A recursive function that searches for possible combinations. Backtracks in case + of a bigger current combination value than the target value. + + Parameters + ---------- + previous_index: Last index from the previous search + target: The value we need to obtain by summing our integers in the path list. + answer: A list of possible combinations + path: Current combination + candidates: A list of integers we can use. + """ if target == 0: answer.append(path.copy()) else: @@ -14,6 +39,20 @@ def combination_sum(candidates: list, target: int) -> list: + """ + >>> combination_sum([2, 3, 5], 8) + [[2, 2, 2, 2], [2, 3, 3], [3, 5]] + >>> combination_sum([2, 3, 6, 7], 7) + [[2, 2, 3], [7]] + >>> combination_sum([-8, 2.3, 0], 1) + Traceback (most recent call last): + ... + ValueError: All elements in candidates must be non-negative + >>> combination_sum([], 1) + Traceback (most recent call last): + ... + ValueError: Candidates list should not be empty + """ if not candidates: raise ValueError("Candidates list should not be empty") @@ -34,4 +73,4 @@ import doctest doctest.testmod() - main()+ main()
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/combination_sum.py
Add return value explanations in docstrings
# https://www.geeksforgeeks.org/solve-crossword-puzzle/ def is_valid( puzzle: list[list[str]], word: str, row: int, col: int, vertical: bool ) -> bool: for i in range(len(word)): if vertical: if row + i >= len(puzzle) or puzzle[row + i][col] != "": return False elif col + i >= len(puzzle[0]) or puzzle[row][col + i] != "": return False return True def place_word( puzzle: list[list[str]], word: str, row: int, col: int, vertical: bool ) -> None: for i, char in enumerate(word): if vertical: puzzle[row + i][col] = char else: puzzle[row][col + i] = char def remove_word( puzzle: list[list[str]], word: str, row: int, col: int, vertical: bool ) -> None: for i in range(len(word)): if vertical: puzzle[row + i][col] = "" else: puzzle[row][col + i] = "" def solve_crossword(puzzle: list[list[str]], words: list[str]) -> bool: for row in range(len(puzzle)): for col in range(len(puzzle[0])): if puzzle[row][col] == "": for word in words: for vertical in [True, False]: if is_valid(puzzle, word, row, col, vertical): place_word(puzzle, word, row, col, vertical) words.remove(word) if solve_crossword(puzzle, words): return True words.append(word) remove_word(puzzle, word, row, col, vertical) return False return True if __name__ == "__main__": PUZZLE = [[""] * 3 for _ in range(3)] WORDS = ["cat", "dog", "car"] if solve_crossword(PUZZLE, WORDS): print("Solution found:") for row in PUZZLE: print(" ".join(row)) else: print("No solution found:")
--- +++ @@ -4,6 +4,26 @@ def is_valid( puzzle: list[list[str]], word: str, row: int, col: int, vertical: bool ) -> bool: + """ + Check if a word can be placed at the given position. + + >>> puzzle = [ + ... ['', '', '', ''], + ... ['', '', '', ''], + ... ['', '', '', ''], + ... ['', '', '', ''] + ... ] + >>> is_valid(puzzle, 'word', 0, 0, True) + True + >>> puzzle = [ + ... ['', '', '', ''], + ... ['', '', '', ''], + ... ['', '', '', ''], + ... ['', '', '', ''] + ... ] + >>> is_valid(puzzle, 'word', 0, 0, False) + True + """ for i in range(len(word)): if vertical: if row + i >= len(puzzle) or puzzle[row + i][col] != "": @@ -16,6 +36,19 @@ def place_word( puzzle: list[list[str]], word: str, row: int, col: int, vertical: bool ) -> None: + """ + Place a word at the given position. + + >>> puzzle = [ + ... ['', '', '', ''], + ... ['', '', '', ''], + ... ['', '', '', ''], + ... ['', '', '', ''] + ... ] + >>> place_word(puzzle, 'word', 0, 0, True) + >>> puzzle + [['w', '', '', ''], ['o', '', '', ''], ['r', '', '', ''], ['d', '', '', '']] + """ for i, char in enumerate(word): if vertical: puzzle[row + i][col] = char @@ -26,6 +59,19 @@ def remove_word( puzzle: list[list[str]], word: str, row: int, col: int, vertical: bool ) -> None: + """ + Remove a word from the given position. + + >>> puzzle = [ + ... ['w', '', '', ''], + ... ['o', '', '', ''], + ... ['r', '', '', ''], + ... ['d', '', '', ''] + ... ] + >>> remove_word(puzzle, 'word', 0, 0, True) + >>> puzzle + [['', '', '', ''], ['', '', '', ''], ['', '', '', ''], ['', '', '', '']] + """ for i in range(len(word)): if vertical: puzzle[row + i][col] = "" @@ -34,6 +80,29 @@ def solve_crossword(puzzle: list[list[str]], words: list[str]) -> bool: + """ + Solve the crossword puzzle using backtracking. + + >>> puzzle = [ + ... ['', '', '', ''], + ... ['', '', '', ''], + ... ['', '', '', ''], + ... ['', '', '', ''] + ... 
] + + >>> words = ['word', 'four', 'more', 'last'] + >>> solve_crossword(puzzle, words) + True + >>> puzzle = [ + ... ['', '', '', ''], + ... ['', '', '', ''], + ... ['', '', '', ''], + ... ['', '', '', ''] + ... ] + >>> words = ['word', 'four', 'more', 'paragraphs'] + >>> solve_crossword(puzzle, words) + False + """ for row in range(len(puzzle)): for col in range(len(puzzle[0])): if puzzle[row][col] == "": @@ -59,4 +128,4 @@ for row in PUZZLE: print(" ".join(row)) else: - print("No solution found:")+ print("No solution found:")
https://raw.githubusercontent.com/TheAlgorithms/Python/HEAD/backtracking/crossword_puzzle_solver.py

Python Docstring Diff Dataset

This dataset contains training samples for models that generate Python documentation patches. Each example provides a Python source file with its docstrings removed and a corresponding unified diff patch that restores the documentation.

The dataset is designed for training or evaluating language models that assist with:

  • Automatic code documentation
  • Docstring generation
  • Code review automation
  • Developer tooling

Dataset Structure

Each entry contains the following fields:

Field        Description
instruction  Task instruction given to the model
code         Python source code with docstrings removed
response     A unified diff patch that adds the correct docstrings
file         Original file path from the source project

Task Format

The model receives a Python file missing its documentation and must produce a unified diff that adds appropriate docstrings.

Example input:

def load_json(path):
    with open(path) as f:
        return json.load(f)

Example expected output:

--- a/file.py
+++ b/file.py
@@
 def load_json(path):
+    """Load JSON data from a file path."""
     with open(path) as f:
         return json.load(f)
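A patch in this shape can be generated mechanically with Python's standard difflib module. The sketch below (file names are illustrative) produces a unified diff from a before/after pair like the example:

```python
import difflib

# Source file before and after the docstring is added.
before = '''def load_json(path):
    with open(path) as f:
        return json.load(f)
'''

after = '''def load_json(path):
    """Load JSON data from a file path."""
    with open(path) as f:
        return json.load(f)
'''

# unified_diff works on lists of lines and yields patch lines
# in the same "--- / +++ / @@" format used by the response field.
patch = "".join(
    difflib.unified_diff(
        before.splitlines(keepends=True),
        after.splitlines(keepends=True),
        fromfile="a/file.py",
        tofile="b/file.py",
    )
)
print(patch)
```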

Data Sources

The dataset was generated by scanning Python packages on GitHub. Docstrings were extracted from functions, classes, async functions, methods, and modules using Python's AST parser. Low-quality documentation was filtered out using heuristics such as:

  • Minimum docstring length
  • Removal of TODO or placeholder documentation
  • Deduplication of similar examples
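The dataset's own extraction code is not included here, but the stripping step it describes might look roughly like the following sketch built on the standard ast module (class and variable names are illustrative, not the actual pipeline code):

```python
import ast

source = '''def load_json(path):
    """Load JSON data from a file path."""
    return path
'''

class DocstringStripper(ast.NodeTransformer):
    """Remove docstrings from modules, classes and (async) functions."""

    def _strip(self, node):
        self.generic_visit(node)
        # A docstring is a leading string-constant expression in the body.
        if (
            node.body
            and isinstance(node.body[0], ast.Expr)
            and isinstance(node.body[0].value, ast.Constant)
            and isinstance(node.body[0].value.value, str)
        ):
            # Keep the body non-empty so the result still parses.
            node.body = node.body[1:] or [ast.Pass()]
        return node

    visit_Module = _strip
    visit_ClassDef = _strip
    visit_FunctionDef = _strip
    visit_AsyncFunctionDef = _strip

tree = DocstringStripper().visit(ast.parse(source))
stripped = ast.unparse(tree)
print(stripped)
```

The same traversal can collect the removed docstrings, which would then feed the diff-generation step that produces the response field.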

Intended Use

This dataset is useful for training models that perform:

  • automatic docstring generation
  • documentation patch creation
  • codebase documentation improvement tools
  • AI-assisted code review systems

License

This dataset is released under the MIT License.
