Stephen Simpson
2025-12-10 11:16:55 -06:00
parent b4ffdb6560
commit 316610e932
14 changed files with 350 additions and 520 deletions

Dockerfile

@@ -1,7 +1,7 @@
 # Multi-stage Dockerfile for Rocky Man
 # This creates an architecture-independent image that can run on x86_64, aarch64, etc.
-FROM rockylinux/rockylinux:9 AS builder
+FROM rockylinux/rockylinux:10 AS builder

 # Install system dependencies
 RUN dnf install -y epel-release \
@@ -18,7 +18,7 @@ RUN dnf install -y epel-release \
 WORKDIR /app

 # Copy project files
-COPY pyproject.toml README.md LICENSE THIRD-PARTY-LICENSES.md ./
+COPY pyproject.toml README.md LICENSE ./
 COPY src ./src
 COPY templates ./templates
@@ -26,7 +26,7 @@ COPY templates ./templates
 RUN python3 -m pip install --no-cache-dir -e .

 # Runtime stage
-FROM rockylinux/rockylinux:9
+FROM rockylinux/rockylinux:10

 # Install runtime dependencies
 RUN dnf install -y epel-release \
@@ -39,8 +39,8 @@ RUN dnf install -y epel-release \
     && dnf clean all

 # Copy Python packages and app from builder
-COPY --from=builder /usr/local/lib/python3.9/site-packages /usr/local/lib/python3.9/site-packages
-COPY --from=builder /usr/local/lib64/python3.9/site-packages /usr/local/lib64/python3.9/site-packages
+COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
+COPY --from=builder /usr/local/lib64/python3.12/site-packages /usr/local/lib64/python3.12/site-packages
 COPY --from=builder /app /app

 WORKDIR /app

LICENSE

@@ -1,6 +1,6 @@
 MIT License

-Copyright (c) 2024 Stephen Simpson
+Copyright (c) 2025 Ctrl IQ, Inc.

 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

README.md

@@ -1,133 +1,108 @@
-# Rocky Man 📚
+# 🚀 Rocky Man 🚀

 **Rocky Man** is a tool for generating searchable HTML documentation from Rocky Linux man pages across BaseOS and AppStream repositories for Rocky Linux 8, 9, and 10.

 ## Features

-- **Fast & Efficient**: Uses filelists.xml to pre-filter packages with man pages
-- **Complete Coverage**: All packages from BaseOS and AppStream repositories
-- **Container Ready**: Works on x86_64, aarch64, arm64, etc.
-- **Smart Cleanup**: Automatic cleanup of temporary files (configurable)
-- **Parallel Processing**: Concurrent downloads and conversions for maximum speed
-- **Multi-version**: Support for Rocky Linux 8, 9, and 10 simultaneously
+- Uses filelists.xml to pre-filter packages with man pages
+- Processes packages from BaseOS and AppStream repositories
+- Runs in containers on x86_64, aarch64, and arm64 architectures
+- Configurable cleanup of temporary files
+- Concurrent downloads and conversions
+- Supports Rocky Linux 8, 9, and 10

 ## Quick Start

-### Podman (Recommended)
+### Podman

-```bash
-# Build the image
-podman build -t rocky-man .
-
-# Generate man pages for Rocky Linux 9.6 (using defaults, no custom args)
-podman run --rm -v $(pwd)/html:/data/html:Z rocky-man
-
-# Generate for specific versions (requires explicit paths)
-podman run --rm -v $(pwd)/html:/app/html:Z rocky-man \
-    --versions 8.10 9.6 10.0 --output-dir /app/html
-
-# With verbose logging
-podman run --rm -v $(pwd)/html:/app/html:Z rocky-man \
-    --versions 9.6 --output-dir /app/html --verbose
-
-# Keep downloaded RPMs (mount the download directory)
-podman run --rm -it \
-    -v $(pwd)/html:/app/html:Z \
-    -v $(pwd)/downloads:/app/tmp/downloads:Z \
-    rocky-man --versions 9.6 --keep-rpms \
-    --output-dir /app/html --download-dir /app/tmp/downloads --verbose
-```
-
-### Docker
-
 ```bash
 # Build the image
 docker build -t rocky-man .

-# Generate man pages (using defaults, no custom args)
-docker run --rm -v $(pwd)/html:/data/html rocky-man
+# Generate for specific versions
+podman run --rm -v $(pwd)/html:/data/html:Z rocky-man \
+    --versions 8.10 9.6 10.0

-# Generate for specific versions (requires explicit paths)
-docker run --rm -v $(pwd)/html:/app/html rocky-man \
-    --versions 9.6 --output-dir /app/html
-
-# Interactive mode for debugging
-docker run --rm -it -v $(pwd)/html:/app/html rocky-man \
-    --versions 9.6 --output-dir /app/html --verbose
-
-# Keep downloaded RPMs (mount the download directory)
-docker run --rm -it \
-    -v $(pwd)/html:/app/html \
-    -v $(pwd)/downloads:/app/tmp/downloads \
-    rocky-man --versions 9.6 --keep-rpms \
-    --output-dir /app/html --download-dir /app/tmp/downloads --verbose
+# Keep downloaded RPMs for multiple builds
+podman run --rm -it \
+    -v $(pwd)/html:/data/html:Z \
+    -v $(pwd)/downloads:/data/tmp/downloads:Z \
+    rocky-man --versions 9.6 --keep-rpms --verbose
 ```
+
+### View the HTML Locally
+
+Start a local web server to browse the generated documentation:
+
+```bash
+python3 -m http.server -d ./html
+```
+
+Then open [http://127.0.0.1:8000](http://127.0.0.1:8000) in your browser.
+
+To use a different port:
+
+```bash
+python3 -m http.server 8080 -d ./html
+```

 ### Directory Structure in Container

-The container uses different paths depending on whether you pass custom arguments:
+The container uses the following paths:

-**Without custom arguments** (using Dockerfile CMD defaults):
 - `/data/html` - Generated HTML output
 - `/data/tmp/downloads` - Downloaded RPM files
 - `/data/tmp/extracts` - Extracted man page files

-**With custom arguments** (argparse defaults from working directory `/app`):
-- `/app/html` - Generated HTML output
-- `/app/tmp/downloads` - Downloaded RPM files
-- `/app/tmp/extracts` - Extracted man page files
-
-**Important**: When passing custom arguments, the container's CMD is overridden and the code falls back to relative paths (`./html` = `/app/html`). You must explicitly specify `--output-dir /app/html --download-dir /app/tmp/downloads` to match your volume mounts. Without this, files are written inside the container and lost when it stops (especially with `--rm`).
+These paths are used by default and can be overridden with command-line arguments if needed.

 ### Local Development

-#### Prerequisites
+**Important**: Rocky Man requires Rocky Linux because it uses the system's native `python3-dnf` module to interact with DNF repositories. This module cannot be installed via pip and must come from the Rocky Linux system packages.

-- Python 3.9+
-- pip (Python package manager)
-- mandoc (man page converter)
-- Rocky Linux system or container (for DNF)
-
-#### Installation
+#### Option 1: Run in a Rocky Linux Container (Recommended)

 ```bash
-# On Rocky Linux, install system dependencies
+# Start a Rocky Linux container with your project mounted
+podman run --rm -it -v $(pwd):/workspace:Z rockylinux/rockylinux:9 /bin/bash
+
+# Inside the container, navigate to the project
+cd /workspace
+
+# Install epel-release for mandoc
+dnf install -y epel-release
+
+# Install system dependencies
 dnf install -y python3 python3-pip python3-dnf mandoc rpm-build dnf-plugins-core

 # Install Python dependencies
 pip3 install -e .
+
+# Run the tool
+python3 -m rocky_man.main --versions 9.6 --output-dir ./html/
 ```

-#### Usage
+#### Option 2: On a Native Rocky Linux System

 ```bash
-# Generate man pages for Rocky 9.6
-python -m rocky_man.main --versions 9.6
-
-# Generate for multiple versions (default)
-python -m rocky_man.main --versions 8.10 9.6 10.0
-
-# Custom output directory
-python -m rocky_man.main --output-dir /var/www/html/man --versions 9.6
-
-# Keep downloaded RPMs for debugging
-python -m rocky_man.main --keep-rpms --verbose
-
-# Adjust parallelism for faster processing
-python -m rocky_man.main --parallel-downloads 10 --parallel-conversions 20
-
-# Use a different mirror
-python -m rocky_man.main --mirror https://mirrors.example.com/
-
-# Only BaseOS (faster)
-python -m rocky_man.main --repo-types BaseOS --versions 9.6
+# Install epel-release for mandoc
+dnf install -y epel-release
+
+# Install system dependencies
+dnf install -y python3 python3-pip python3-dnf mandoc rpm-build dnf-plugins-core
+
+# Install Python dependencies
+pip3 install -e .
+
+# Run the tool
+python3 -m rocky_man.main --versions 9.6 --output-dir ./html/
 ```

 ## Architecture

-Rocky Man is organized into clean, modular components:
+Rocky Man is organized into components:

-```
+```text
 rocky-man/
 ├── src/rocky_man/
 │   ├── models/          # Data models (Package, ManFile)
@@ -143,22 +118,28 @@ rocky-man/

 ### How It Works

-1. **Package Discovery** - Parse repository `filelists.xml` to identify packages with man pages
-2. **Smart Download** - Download only packages containing man pages with parallel downloads
-3. **Extraction** - Extract man page files from RPM packages
-4. **Conversion** - Convert troff format to HTML using mandoc
-5. **Web Generation** - Wrap HTML in templates and generate search index
-6. **Cleanup** - Automatically remove temporary files (configurable)
+1. **Package Discovery** - Parses repository metadata (`repodata/repomd.xml` and `filelists.xml.gz`) to identify packages containing files in `/usr/share/man/` directories
+2. **Package Download** - Downloads identified RPM packages using DNF, with configurable parallel downloads (default: 5)
+3. **Man Page Extraction** - Extracts man page files from RPMs using `rpm2cpio`, filtering by section and language based on configuration
+4. **HTML Conversion** - Converts troff-formatted man pages to HTML using mandoc, with parallel processing (default: 10 workers)
+5. **Cross-Reference Linking** - Parses converted HTML to add hyperlinks between man page references (e.g., `bash(1)` becomes clickable)
+6. **Index Generation** - Creates search indexes (JSON/gzipped) and navigation pages using Jinja2 templates
+7. **Cleanup** - Removes temporary files (RPMs and extracted content) unless `--keep-rpms` or `--keep-extracts` is specified
 ## Command Line Options

-```
-usage: rocky-man [-h] [--versions VERSIONS [VERSIONS ...]]
+```bash
+usage: main.py [-h] [--versions VERSIONS [VERSIONS ...]]
        [--repo-types REPO_TYPES [REPO_TYPES ...]]
        [--output-dir OUTPUT_DIR] [--download-dir DOWNLOAD_DIR]
        [--extract-dir EXTRACT_DIR] [--keep-rpms] [--keep-extracts]
-       [--parallel-downloads N] [--parallel-conversions N]
-       [--mirror URL] [--template-dir DIR] [-v]
+       [--parallel-downloads PARALLEL_DOWNLOADS]
+       [--parallel-conversions PARALLEL_CONVERSIONS] [--mirror MIRROR]
+       [--vault] [--existing-versions [VERSION ...]]
+       [--template-dir TEMPLATE_DIR] [-v]
+       [--skip-sections [SKIP_SECTIONS ...]]
+       [--skip-packages [SKIP_PACKAGES ...]] [--skip-languages]
+       [--keep-languages] [--allow-all-sections]

 Generate HTML documentation for Rocky Linux man pages

@@ -169,11 +150,11 @@ optional arguments:
   --repo-types REPO_TYPES [REPO_TYPES ...]
                         Repository types to process (default: BaseOS AppStream)
   --output-dir OUTPUT_DIR
-                        Output directory for HTML files (default: ./html)
+                        Output directory for HTML files (default: /data/html)
   --download-dir DOWNLOAD_DIR
-                        Directory for downloading packages (default: ./tmp/downloads)
+                        Directory for downloading packages (default: /data/tmp/downloads)
   --extract-dir EXTRACT_DIR
-                        Directory for extracting man pages (default: ./tmp/extracts)
+                        Directory for extracting man pages (default: /data/tmp/extracts)
   --keep-rpms           Keep downloaded RPM files after processing
   --keep-extracts       Keep extracted man files after processing
   --parallel-downloads PARALLEL_DOWNLOADS
@@ -196,80 +177,11 @@ optional arguments:
   --allow-all-sections  Include all man sections (overrides --skip-sections)
 ```

-## Troubleshooting
+## Attribution

-### DNF Errors
-
-**Problem**: `dnf` module not found or repository errors
-
-**Solution**: Ensure you're running on Rocky Linux or in a Rocky Linux container:
-
-```bash
-# Run in Rocky Linux container
-podman run --rm -it -v $(pwd):/app rockylinux:9 /bin/bash
-cd /app
-
-# Install dependencies
-dnf install -y python3 python3-dnf mandoc rpm-build dnf-plugins-core
-
-# Run the script
-python3 -m rocky_man.main --versions 9.6
-```
-
-### Mandoc Not Found
-
-**Problem**: `mandoc: command not found`
-
-**Solution**: Install mandoc:
-
-```bash
-dnf install -y mandoc
-```
-
-### Permission Errors in Container
-
-**Problem**: Cannot write to mounted volume
-
-**Solution**: Use the `:Z` flag with podman for SELinux contexts:
-
-```bash
-podman run --rm -v $(pwd)/html:/data/html:Z rocky-man
-```
-
-For Docker, ensure the volume path is absolute:
-
-```bash
-docker run --rm -v "$(pwd)/html":/data/html rocky-man
-```
-
-### Out of Memory
-
-**Problem**: Process killed due to memory
-
-**Solution**: Reduce parallelism:
-
-```bash
-python -m rocky_man.main --parallel-downloads 2 --parallel-conversions 5
-```
-
-### Slow Downloads
-
-**Problem**: Downloads are very slow
-
-**Solution**: Use a closer mirror:
-
-```bash
-# Find mirrors at: https://mirrors.rockylinux.org/mirrormanager/mirrors
-python -m rocky_man.main --mirror https://mirror.example.com/rocky/
-```
-
-## Performance Tips
-
-1. **Use closer mirrors** - Significant speed improvement for downloads
-2. **Increase parallelism** - If you have bandwidth: `--parallel-downloads 15`
-3. **Process one repo at a time** - Use `--repo-types BaseOS` first, then `--repo-types AppStream`
-4. **Keep RPMs for re-runs** - Use `--keep-rpms` if testing
-5. **Run in container** - More consistent performance
+The man pages displayed in this documentation are sourced from Rocky Linux distribution packages. All man page content is copyrighted by their respective authors and distributed under the licenses specified within each man page.
+
+This tool generates HTML documentation from man pages contained in Rocky Linux packages but does not modify the content of the man pages themselves.

 ## License

@@ -277,20 +189,16 @@ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file

 ### Third-Party Software

-This project uses several open source components. See [THIRD-PARTY-LICENSES.md](THIRD-PARTY-LICENSES.md) for complete license information and attributions.
+This project uses several open source components.
+
+Key dependencies include:
+
+- **mandoc** - Man page converter (ISC License)
+- **python3-dnf** - DNF package manager Python bindings (GPL-2.0-or-later)
+- **Fuse.js** - Client-side search (Apache 2.0)
+- **Python packages**: requests, rpmfile, Jinja2, lxml, zstandard
+- **Fonts**: Red Hat Display, Red Hat Text, JetBrains Mono (SIL OFL)

 ### Trademark Notice

 Rocky Linux is a trademark of the Rocky Enterprise Software Foundation (RESF). This project is not officially affiliated with or endorsed by RESF. All trademarks are the property of their respective owners. This project complies with RESF's trademark usage guidelines.

-## Contributing
-
-Contributions welcome! Please:
-
-1. Fork the repository
-2. Create a feature branch (`git checkout -b feature/amazing-feature`)
-3. Make your changes with proper documentation
-4. Test thoroughly
-5. Commit with clear messages (`git commit -m 'feat: add amazing feature'`)
-6. Push to your branch (`git push origin feature/amazing-feature`)
-7. Open a Pull Request

pyproject.toml

@@ -7,13 +7,13 @@ license = {text = "MIT"}
 authors = [
     { name = "Stephen Simpson", email = "ssimpson89@users.noreply.github.com" }
 ]
-requires-python = ">=3.9"
+requires-python = ">=3.12"
 dependencies = [
-    "requests>=2.31.0",
-    "rpmfile>=2.0.0",
+    "requests>=2.32.0",
+    "rpmfile>=2.1.0",
     "jinja2>=3.1.0",
-    "lxml>=5.0.0",
-    "zstandard>=0.18.0",
+    "lxml>=6.0.0",
+    "zstandard>=0.25.0",
 ]

 [project.scripts]

src/rocky_man/main.py

@@ -43,18 +43,13 @@ def process_version(config: Config, version: str, template_dir: Path) -> bool:
     all_man_files = []

-    # Process each repository type
     for repo_type in config.repo_types:
         logger.info(f"Processing {repo_type} repository")

-        # Use first available architecture (man pages are arch-independent)
         arch = config.architectures[0]

-        # Create cache dir for this repo
         cache_dir = config.download_dir / f".cache/{version}/{repo_type}"

         try:
-            # Initialize repository manager
             repo_manager = RepoManager(
                 config=config,
                 version=version,
@@ -64,7 +59,6 @@ def process_version(config: Config, version: str, template_dir: Path) -> bool:
                 download_dir=version_download_dir,
             )

-            # List packages (with man pages only)
             packages = repo_manager.list_packages(with_manpages_only=True)

             if not packages:
@@ -73,7 +67,6 @@ def process_version(config: Config, version: str, template_dir: Path) -> bool:
             logger.info(f"Found {len(packages)} packages with man pages in {repo_type}")

-            # Filter out packages that should be skipped
             if config.skip_packages:
                 original_count = len(packages)
                 packages = [
@@ -86,13 +79,11 @@ def process_version(config: Config, version: str, template_dir: Path) -> bool:
             )
             logger.info(f"Processing {len(packages)} packages")

-            # Download packages
             logger.info("Downloading packages...")
             downloaded = repo_manager.download_packages(
                 packages, max_workers=config.parallel_downloads
             )

-            # Extract man pages
             logger.info("Extracting man pages...")
             extractor = ManPageExtractor(
                 version_extract_dir,
@@ -105,7 +96,6 @@ def process_version(config: Config, version: str, template_dir: Path) -> bool:
             logger.info(f"Extracted {len(man_files)} man pages")

-            # Read content for each man file
             logger.info("Reading man page content...")
             man_files_with_content = []
             for man_file in man_files:
@@ -113,7 +103,6 @@ def process_version(config: Config, version: str, template_dir: Path) -> bool:
                 if content:
                     man_files_with_content.append((man_file, content))

-            # Convert to HTML
             logger.info("Converting man pages to HTML...")
             converter = ManPageConverter(version_output_dir)
             converted = converter.convert_many(
@@ -122,7 +111,6 @@ def process_version(config: Config, version: str, template_dir: Path) -> bool:
             all_man_files.extend(converted)

-            # Cleanup if requested
             if not config.keep_rpms:
                 logger.info("Cleaning up downloaded packages...")
                 for package in downloaded:
@@ -141,30 +129,21 @@ def process_version(config: Config, version: str, template_dir: Path) -> bool:
         logger.error(f"No man pages were successfully processed for version {version}")
         return False

-    # Generate web pages
     logger.info("Generating web pages...")
     web_gen = WebGenerator(template_dir, config.output_dir)

-    # Generate search index
     search_index = web_gen.generate_search_index(all_man_files, version)
     web_gen.save_search_index(search_index, version)

-    # Generate index page
     web_gen.generate_index(version, search_index)

-    # Generate packages index page
     web_gen.generate_packages_index(version, search_index)

-    # Set HTML paths for all man files
     for man_file in all_man_files:
         if not man_file.html_path:
             man_file.html_path = web_gen._get_manpage_path(man_file, version)

-    # Link cross-references between man pages
     logger.info("Linking cross-references...")
     converter.link_cross_references(all_man_files, version)

-    # Wrap man pages in templates
     logger.info("Generating man page HTML...")
     for man_file in all_man_files:
         web_gen.generate_manpage_html(man_file, version)
@@ -198,22 +177,22 @@ def main():
     parser.add_argument(
         "--output-dir",
         type=Path,
-        default=Path("./html"),
-        help="Output directory for HTML files (default: ./html)",
+        default=Path("/data/html"),
+        help="Output directory for HTML files (default: /data/html)",
     )
     parser.add_argument(
         "--download-dir",
         type=Path,
-        default=Path("./tmp/downloads"),
-        help="Directory for downloading packages (default: ./tmp/downloads)",
+        default=Path("/data/tmp/downloads"),
+        help="Directory for downloading packages (default: /data/tmp/downloads)",
     )
     parser.add_argument(
         "--extract-dir",
         type=Path,
-        default=Path("./tmp/extracts"),
-        help="Directory for extracting man pages (default: ./tmp/extracts)",
+        default=Path("/data/tmp/extracts"),
+        help="Directory for extracting man pages (default: /data/tmp/extracts)",
     )
     parser.add_argument(
@@ -307,21 +286,17 @@ def main():
     args = parser.parse_args()

-    # Setup logging
     setup_logging(args.verbose)
     logger = logging.getLogger(__name__)

-    # Handle filtering options
-    skip_languages = True  # default
+    skip_languages = True
     if args.keep_languages:
         skip_languages = False
     elif args.skip_languages is not None:
         skip_languages = args.skip_languages

-    # Determine content directory
     content_dir = "vault/rocky" if args.vault else "pub/rocky"

-    # Create configuration
     config = Config(
         base_url=args.mirror,
         content_dir=content_dir,
@@ -340,7 +315,6 @@ def main():
         allow_all_sections=args.allow_all_sections,
     )

-    # Get existing versions from scan and argument
     scanned_versions = [
         d.name
         for d in config.output_dir.iterdir()
@@ -348,7 +322,6 @@ def main():
     ]
     arg_versions = args.existing_versions or []

-    # Sort versions numerically by (major, minor)
     def version_key(v):
         try:
             major, minor = v.split(".")
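The hunk above cuts off mid-function. A complete sketch of such a sort key behaves as follows; the fallback branch is an assumption, since the diff does not show it:

```python
def version_key(v: str):
    """Order version strings like '9.6' numerically by (major, minor)."""
    try:
        major, minor = v.split(".")
        return (0, int(major), int(minor))
    except ValueError:
        return (1, 0, 0)  # assumed fallback: sort malformed names last

print(sorted(["10.0", "9.6", "8.10"], key=version_key))
# ['8.10', '9.6', '10.0']
```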
@@ -365,7 +338,6 @@ def main():
     logger.info(f"Repositories: {', '.join(config.repo_types)}")
     logger.info(f"Output directory: {config.output_dir}")

-    # Log filtering configuration
     if config.skip_sections:
         logger.info(f"Skipping man sections: {', '.join(config.skip_sections)}")
     else:
@@ -379,7 +351,6 @@ def main():
     else:
         logger.info("Including all languages")

-    # Process each version
     processed_versions = []
     for version in config.versions:
         try:
@@ -392,11 +363,13 @@ def main():
         logger.error("No versions were successfully processed")
         return 1

-    # Generate root index
     logger.info("Generating root index page...")
     web_gen = WebGenerator(args.template_dir, config.output_dir)
     web_gen.generate_root_index(all_versions)
+    logger.info("Generating 404 page...")
+    web_gen.generate_404_page()

     logger.info("=" * 60)
     logger.info("Processing complete!")
     logger.info(f"Generated documentation for: {', '.join(processed_versions)}")


@@ -35,35 +35,22 @@ class ManFile:
         self._parse_path()

     def _parse_path(self):
-        """Extract section, name, and language from the file path.
-
-        Example paths:
-            /usr/share/man/man1/bash.1.gz
-            /usr/share/man/es/man1/bash.1.gz
-            /usr/share/man/man3/printf.3.gz
-        """
+        """Extract section, name, and language from the file path."""
         parts = self.file_path.parts
         filename = self.file_path.name

-        # Remove .gz extension if present
         if filename.endswith('.gz'):
             filename = filename[:-3]

-        # Extract section from parent directory (e.g., 'man1', 'man3p', 'man3pm')
         for part in reversed(parts):
             if part.startswith('man') and len(part) > 3:
-                # Check if it starts with 'man' followed by a digit
                 if part[3].isdigit():
                     self.section = part[3:]
                     break

-        # Extract section from filename if not found yet (e.g., 'foo.3pm' -> section '3pm')
-        # and extract name
         name_parts = filename.split('.')
         if len(name_parts) >= 2:
-            # Try to identify section from last part
             potential_section = name_parts[-1]
-            # Section is typically digit optionally followed by letters (1, 3p, 3pm, etc.)
             if potential_section and potential_section[0].isdigit():
                 if not self.section:
                     self.section = potential_section
@@ -73,14 +60,10 @@ class ManFile:
         else:
             self.name = name_parts[0]

-        # Check for language subdirectory
-        # Pattern: /usr/share/man/<lang>/man<section>/
         for i, part in enumerate(parts):
             if part == 'man' and i + 1 < len(parts):
                 next_part = parts[i + 1]
-                # If next part is not 'man<digit>', it's a language code
                 if not (next_part.startswith('man') and next_part[3:].isdigit()):
-                    # Common language codes are 2-5 chars (en, es, pt_BR, etc.)
                     if len(next_part) <= 5:
                         self.language = next_part
                         break
@@ -93,14 +76,12 @@ class ManFile:
     @property
     def html_filename(self) -> str:
         """Get the HTML filename for this man page."""
-        # Clean name for filesystem safety
         safe_name = self._clean_filename(self.name)
         suffix = f".{self.language}" if self.language else ""
         return f"{safe_name}.{self.section}{suffix}.html"

     def _clean_filename(self, name: str) -> str:
         """Clean filename for filesystem safety."""
-        # Replace problematic characters
         name = name.replace('/', '_')
         name = name.replace(':', '_')
         name = re.sub(r'\.\.', '__', name)
@@ -108,19 +89,13 @@ class ManFile:
     @property
     def uri_path(self) -> str:
-        """Get the URI path for this man page (relative to version root).
-
-        Returns path like: 'bash/man1/bash.1.html'
-        """
+        """Get the URI path for this man page (relative to version root)."""
         if not self.html_path:
             return ""

-        # Get path relative to the version directory
-        # Assuming structure: html/<version>/<package>/<section>/<file>.html
         parts = self.html_path.parts
         try:
-            # Find the version part (e.g., '9.5') and return everything after it
             for i, part in enumerate(parts):
-                if re.match(r'\d+\.\d+', part):  # Version pattern
+                if re.match(r'\d+\.\d+', part):
                     return '/'.join(parts[i+1:])
         except (ValueError, IndexError):
             pass


@@ -38,15 +38,11 @@ class ManPageConverter:
     def _check_mandoc() -> bool:
         """Check if mandoc is available."""
         try:
-            # Run mandoc with no arguments - it will show usage and exit
-            # We just want to verify the command exists, not that it succeeds
             subprocess.run(["mandoc"], capture_output=True, timeout=5)
             return True
         except FileNotFoundError:
-            # mandoc command not found
             return False
         except Exception:
-            # Other errors (timeout, etc) - but mandoc exists
             return True

     def convert(self, man_file: ManFile, content: str) -> bool:
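An alternative probe, not what `_check_mandoc` above does but a common stdlib pattern, is `shutil.which`, which avoids spawning a process at all:

```python
import shutil

def mandoc_available() -> bool:
    # shutil.which returns the resolved path if mandoc is on PATH, else None
    return shutil.which("mandoc") is not None
```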
@@ -60,26 +56,20 @@ class ManPageConverter:
             True if conversion successful
         """
         try:
-            # Run mandoc to convert to HTML
             html = self._run_mandoc(content)
             if not html:
                 logger.warning(f"mandoc produced no output for {man_file.display_name}")
                 return False

-            # Clean up HTML
             html = self._clean_html(html)

-            # Check if mandoc output indicates this is a symlink/redirect
-            # Pattern: <div class="manual-text">/usr/share/man/man8/target.8.gz</div>
-            # or: <div class="manual-text">See the file /usr/share/man/man8/target.8.</div>
-            # or: <div class="manual-text">See the file man1/builtin.1.</div>
+            # Check if output indicates this is a symlink/redirect
             symlink_match = re.search(
                 r'<div class="manual-text">.*?(?:See the file )?((?:/usr/share/man/)?man\d+[a-z]*/([^/]+)\.(\d+[a-z]*)(?:\.gz)?)\..*?</div>',
                 html,
                 re.DOTALL,
            )
             if not symlink_match:
-                # Try simpler pattern without "See the file" or period
                 symlink_match = re.search(
                     r'<div class="manual-text">.*?((?:/usr/share/man/)?man\d+[a-z]*/([^/<]+)\.(\d+[a-z]*)(?:\.gz)?).*?</div>',
                     html,
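The first pattern above can be exercised on its own; this sketch feeds it a made-up mandoc output line:

```python
import re

SYMLINK_RE = re.compile(
    r'<div class="manual-text">.*?(?:See the file )?'
    r'((?:/usr/share/man/)?man\d+[a-z]*/([^/]+)\.(\d+[a-z]*)(?:\.gz)?)\..*?</div>',
    re.DOTALL,
)

sample = '<div class="manual-text">See the file /usr/share/man/man8/ifup.8.</div>'
m = SYMLINK_RE.search(sample)
if m:
    print(m.group(2), m.group(3))  # ifup 8
```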
@@ -94,14 +84,9 @@ class ManPageConverter:
             )
             html = self._generate_redirect_html({"name": name, "section": section})

-            # Store in ManFile object
             man_file.html_content = html

-            # Determine output path
             output_path = self._get_output_path(man_file)
             man_file.html_path = output_path

-            # Save HTML file
             output_path.parent.mkdir(parents=True, exist_ok=True)
             with open(output_path, "w", encoding="utf-8") as f:
                 f.write(html)
@@ -128,13 +113,11 @@ class ManPageConverter:
         converted = []

         with ThreadPoolExecutor(max_workers=max_workers) as executor:
-            # Submit all conversion tasks
             future_to_manfile = {
                 executor.submit(self.convert, man_file, content): man_file
                 for man_file, content in man_files
             }

-            # Collect results
             for future in as_completed(future_to_manfile):
                 man_file = future_to_manfile[future]
                 try:
@@ -166,7 +149,6 @@ class ManPageConverter:
             if result.returncode != 0:
                 stderr = result.stderr.decode("utf-8", errors="replace")
                 logger.warning(f"mandoc returned error: {stderr}")
-                # Sometimes mandoc returns non-zero but still produces output
                 if result.stdout:
                     return result.stdout.decode("utf-8", errors="replace")
                 return None
@@ -189,15 +171,11 @@ class ManPageConverter:
         Returns:
             Cleaned HTML
         """
-        # Remove empty parentheses in header cells
         html = re.sub(
-            r'<td class="head-ltitle">\(\)</td>', '<td class="head-ltitle"></td>', html
-        )
-        html = re.sub(
-            r'<td class="head-rtitle">\(\)</td>', '<td class="head-rtitle"></td>', html
+            r'<td class="head-(ltitle|rtitle)">\(\)</td>',
+            r'<td class="head-\1"></td>',
+            html,
         )
-        # Strip leading/trailing whitespace
         html = html.strip()
         return html
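The consolidated substitution relies on a backreference to rebuild whichever cell class matched. A quick standalone check:

```python
import re

row = '<td class="head-ltitle">()</td><td class="head-rtitle">()</td>'
print(re.sub(
    r'<td class="head-(ltitle|rtitle)">\(\)</td>',
    r'<td class="head-\1"></td>',  # \1 re-emits the captured class suffix
    row,
))
# <td class="head-ltitle"></td><td class="head-rtitle"></td>
```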
@@ -213,12 +191,8 @@ class ManPageConverter:
         """
         name = target_info["name"]
         section = target_info["section"]

-        # Generate the relative path to the target man page
-        # Symlinks are in the same package, just different file names
         target_filename = f"{name}.{section}.html"

-        # Generate simple redirect HTML with a working hyperlink
         html = f'''<div class="symlink-notice" style="padding: 2rem; text-align: center; background-color: var(--bg-tertiary); border-radius: 8px; border: 1px solid var(--border-color);">
     <p style="font-size: 1.2rem; margin-bottom: 1.5rem; color: var(--text-primary);">
         This is an alias for <b>{name}</b>({section}).
@@ -230,35 +204,26 @@ class ManPageConverter:
         return html

     def link_cross_references(self, man_files: List[ManFile], version: str) -> None:
-        """Add hyperlinks to cross-references in SEE ALSO sections.
-
-        Goes through all converted HTML files and converts man page references
-        like pty(4) into working hyperlinks.
+        """Add hyperlinks to cross-references in man pages.

         Args:
             man_files: List of all converted ManFile objects
+            version: Rocky Linux version
         """
-        # Build lookup index: (name, section) -> relative_path
         lookup = {}
         for mf in man_files:
             key = (mf.name.lower(), str(mf.section))
             if key not in lookup:
-                # Store the relative path from the version root
                 lookup[key] = f"{mf.package_name}/man{mf.section}/{mf.html_filename}"

         logger.info(f"Linking cross-references across {len(man_files)} man pages...")

-        # Process each man page HTML content
         for man_file in man_files:
             if not man_file.html_content:
                 continue

             try:
                 html = man_file.html_content
-                # Find and replace man page references
-                # Mandoc outputs references as: <b>name</b>(section)
-                # Pattern matches both <b>name</b>(section) and plain name(section)
                 pattern = (
                     r"<b>([\w\-_.]+)</b>\((\d+[a-z]*)\)|\b([\w\-_.]+)\((\d+[a-z]*)\)"
                 )
@@ -266,42 +231,25 @@ class ManPageConverter:
                 def replace_reference(match):
                     full_match = match.group(0)
-                    # Check if this match is already inside an <a> tag
-                    # Look back up to 500 chars for context
+                    # Skip if already inside an <a> tag
                     before_text = html[max(0, match.start() - 500) : match.start()]
-                    # Find the last <a and last </a> before this match
                     last_open = before_text.rfind("<a ")
                     last_close = before_text.rfind("</a>")
-                    # If the last <a> is after the last </a>, we're inside a link
                     if last_open > last_close:
                         return full_match

-                    if match.group(1):  # <b>name</b>(section) format
-                        name = match.group(1).lower()
-                        section = match.group(2)
-                    else:  # plain name(section) format
-                        name = match.group(3).lower()
-                        section = match.group(4)
+                    name = (match.group(1) or match.group(3)).lower()
+                    section = match.group(2) or match.group(4)

-                    # Look up the referenced man page
                     key = (name, section)
                     if key in lookup:
-                        # Calculate relative path from current file to target
                         target_path = lookup[key]
-                        # File structure: output_dir/version/package_name/manN/file.html
-                        # Need to go up 3 levels to reach output root, then down to version/target
-                        # Current: version/package_name/manN/file.html
-                        # Target: version/other_package/manM/file.html
                         rel_path = f"../../../{version}/{target_path}"
                         return f'<a href="{rel_path}">{full_match}</a>'
                     return full_match

                 updated_html = re.sub(pattern, replace_reference, html)

-                # Update the content if something changed
                 if updated_html != html:
                     man_file.html_content = updated_html
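In isolation, the pattern and the simplified group handling work like this; the lookup entry and the `9.6` version are invented for the demo:

```python
import re

PATTERN = r"<b>([\w\-_.]+)</b>\((\d+[a-z]*)\)|\b([\w\-_.]+)\((\d+[a-z]*)\)"
lookup = {("bash", "1"): "bash/man1/bash.1.html"}  # hypothetical index entry

def replace_reference(match):
    name = (match.group(1) or match.group(3)).lower()
    section = match.group(2) or match.group(4)
    target = lookup.get((name, section))
    if target:
        return f'<a href="../../../9.6/{target}">{match.group(0)}</a>'
    return match.group(0)  # unknown references stay as plain text

print(re.sub(PATTERN, replace_reference, "See <b>bash</b>(1) and foo(9)."))
```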
@@ -313,23 +261,7 @@ class ManPageConverter:
         logger.info("Cross-reference linking complete")

     def _get_output_path(self, man_file: ManFile) -> Path:
-        """Determine output path for HTML file.
-
-        Structure: output_dir/<package>/<section>/<name>.<section>[.<lang>].html
-
-        Args:
-            man_file: ManFile object
-
-        Returns:
-            Path for HTML output
-        """
-        # Package directory
+        """Determine output path for HTML file."""
         pkg_dir = self.output_dir / man_file.package_name
-        # Section directory (man1, man2, etc.)
         section_dir = pkg_dir / f"man{man_file.section}"
-
-        # HTML filename
-        filename = man_file.html_filename
-        return section_dir / filename
+        return section_dir / man_file.html_filename


@@ -48,7 +48,6 @@ class ManPageExtractor:
             logger.warning(f"Package file not found: {package.name}")
             return []

-        # Create extraction directory for this package
         pkg_extract_dir = self.extract_dir / package.name
         pkg_extract_dir.mkdir(parents=True, exist_ok=True)
@@ -59,33 +58,39 @@ class ManPageExtractor:
             with rpmfile.open(package.download_path) as rpm:
                 for member in rpm.getmembers():
-                    # Check if this is a man page file
                     if not self._is_manpage(member.name):
                         continue

-                    # Create ManFile object
-                    extract_path = pkg_extract_dir / member.name.lstrip('/')
+                    # Sanitize path to prevent path traversal attacks
+                    safe_name = member.name.lstrip('/')
+                    extract_path = pkg_extract_dir / safe_name
+
+                    # Resolve to absolute path and verify it's within the extraction directory
+                    real_extract_path = extract_path.resolve()
+                    real_pkg_extract_dir = pkg_extract_dir.resolve()
+                    if not real_extract_path.is_relative_to(real_pkg_extract_dir):
+                        logger.warning(f"Skipping file with path traversal attempt: {member.name}")
+                        continue

                     man_file = ManFile(
-                        file_path=extract_path,
+                        file_path=real_extract_path,
                         package_name=package.name
                     )

-                    # Apply section filtering
                     if self.skip_sections and man_file.section in self.skip_sections:
                         logger.debug(f"Skipping {man_file.display_name} (section {man_file.section})")
                         continue

-                    # Apply language filtering
                     if self.skip_languages and man_file.language and man_file.language != 'en':
                         logger.debug(f"Skipping {man_file.display_name} (language {man_file.language})")
                         continue

-                    # Extract the file
-                    extract_path.parent.mkdir(parents=True, exist_ok=True)
+                    real_extract_path.parent.mkdir(parents=True, exist_ok=True)
                     try:
                         content = rpm.extractfile(member).read()
-                        with open(extract_path, 'wb') as f:
+                        with open(real_extract_path, 'wb') as f:
                             f.write(content)

                         man_file.content = content
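The resolve-then-verify idiom added here can be seen in a self-contained form; the paths below are placeholders:

```python
from pathlib import Path

def safe_extract_path(base: Path, member_name: str) -> Path | None:
    """Return the resolved target, or None if it escapes the base directory."""
    candidate = (base / member_name.lstrip('/')).resolve()
    if not candidate.is_relative_to(base.resolve()):
        return None  # e.g. an archive member named '../../etc/passwd'
    return candidate

base = Path("/tmp/extract")
print(safe_extract_path(base, "usr/share/man/man1/bash.1.gz"))  # stays inside
print(safe_extract_path(base, "../../etc/passwd"))              # None
```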
@@ -118,13 +123,11 @@ class ManPageExtractor:
         all_man_files = []

         with ThreadPoolExecutor(max_workers=max_workers) as executor:
-            # Submit all extraction tasks
             future_to_pkg = {
                 executor.submit(self.extract_from_package, pkg): pkg
                 for pkg in packages
             }

-            # Collect results
             for future in as_completed(future_to_pkg):
                 pkg = future_to_pkg[future]
                 try:
@@ -150,27 +153,15 @@ class ManPageExtractor:
             return ""

         try:
-            # Try reading as gzipped file first
             if man_file.file_path.suffix == '.gz':
-                with gzip.open(man_file.file_path, 'rb') as f:
-                    content = f.read()
-            else:
-                # Read as plain text
-                with open(man_file.file_path, 'rb') as f:
-                    content = f.read()
-            # Decode with error handling
-            return content.decode('utf-8', errors='replace')
-        except gzip.BadGzipFile:
-            # Not a gzip file, try reading as plain text
-            try:
-                with open(man_file.file_path, 'rb') as f:
-                    content = f.read()
-                return content.decode('utf-8', errors='replace')
-            except Exception as e:
-                logger.error(f"Error reading {man_file.file_path}: {e}")
-                return ""
+                try:
+                    with gzip.open(man_file.file_path, 'rb') as f:
+                        return f.read().decode('utf-8', errors='replace')
+                except gzip.BadGzipFile:
+                    pass
+            with open(man_file.file_path, 'rb') as f:
+                return f.read().decode('utf-8', errors='replace')
         except Exception as e:
             logger.error(f"Error reading {man_file.file_path}: {e}")
@@ -178,37 +169,19 @@ class ManPageExtractor:
     @staticmethod
     def _is_manpage(path: str) -> bool:
-        """Check if a file path is a man page.
-
-        Args:
-            path: File path to check
-
-        Returns:
-            True if this looks like a man page file
-        """
-        # Must contain /man/ in path
+        """Check if a file path is a man page."""
         if '/man/' not in path:
             return False

-        # Should be in /usr/share/man/ or /usr/man/
         if not ('/share/man/' in path or path.startswith('/usr/man/')):
             return False

-        # Common man page patterns
-        # - /usr/share/man/man1/foo.1.gz
-        # - /usr/share/man/es/man1/foo.1.gz
-        # - /usr/share/man/man3/printf.3.gz
         parts = path.split('/')
-        # Check for man<digit> directory
-        has_man_section = any(
+        return any(
             part.startswith('man') and len(part) > 3 and part[3].isdigit()
             for part in parts
         )
-        return has_man_section

     def cleanup_extracts(self, package: Package):
         """Clean up extracted files for a package.


@@ -4,7 +4,7 @@ import gzip
 import logging
 import xml.etree.ElementTree as ET
 from pathlib import Path
-from typing import Set, Dict
+from typing import Set
 from urllib.parse import urljoin

 import requests
@@ -38,19 +38,16 @@ class ContentsParser:
         """
         logger.info(f"Fetching filelists for {self.repo_url}")

-        # Download and parse repomd.xml to find filelists location
         filelists_path = self._get_filelists_path()
         if not filelists_path:
             logger.warning("Could not find filelists in repository metadata")
             return set()

-        # Download filelists.xml
         filelists_file = self._download_filelists(filelists_path)
         if not filelists_file:
             logger.warning("Could not download filelists")
             return set()

-        # Parse filelists to find packages with man pages
         packages = self._parse_filelists(filelists_file)
         logger.info(f"Found {len(packages)} packages with man pages")
@@ -68,11 +65,7 @@ class ContentsParser:
             response = requests.get(repomd_url, timeout=30)
             response.raise_for_status()

-            # Parse XML
             root = ET.fromstring(response.content)

-            # Find filelists entry
-            # XML structure: <repomd><data type="filelists"><location href="..."/></data></repomd>
             ns = {'repo': 'http://linux.duke.edu/metadata/repo'}

             for data in root.findall('repo:data', ns):
@@ -81,7 +74,7 @@ class ContentsParser:
                 if location is not None:
                     return location.get('href')

-            # Fallback: try without namespace
+            # Fallback without namespace
             for data in root.findall('data'):
                 if data.get('type') == 'filelists':
                     location = data.find('location')
@@ -105,7 +98,6 @@ class ContentsParser:
         url = urljoin(self.repo_url, relative_path)
         cache_file = self.cache_dir / relative_path.split('/')[-1]

-        # Return cached file if it exists
         if cache_file.exists():
             logger.debug(f"Using cached filelists: {cache_file}")
             return cache_file
@@ -138,36 +130,26 @@ class ContentsParser:
         packages = set()

         try:
-            # Open gzipped XML file
             with gzip.open(filelists_path, 'rb') as f:
-                # Use iterparse for memory efficiency (files can be large)
                 context = ET.iterparse(f, events=('start', 'end'))

                 current_package = None
                 has_manpage = False

                 for event, elem in context:
-                    if event == 'start':
-                        if elem.tag.endswith('package'):
-                            # Get package name from 'name' attribute
-                            current_package = elem.get('name')
-                            has_manpage = False
+                    if event == 'start' and elem.tag.endswith('package'):
+                        current_package = elem.get('name')
+                        has_manpage = False
                     elif event == 'end':
                         if elem.tag.endswith('file'):
-                            # Check if file path contains /man/
                             file_path = elem.text
-                            if file_path and '/man/' in file_path:
-                                # Could be /usr/share/man/ or /usr/man/
-                                if '/share/man/' in file_path or file_path.startswith('/usr/man/'):
-                                    has_manpage = True
+                            if file_path and self._is_manpage_path(file_path):
+                                has_manpage = True
                         elif elem.tag.endswith('package'):
-                            # End of package entry
                             if has_manpage and current_package:
                                 packages.add(current_package)
-                            # Clear element to free memory
                             elem.clear()
                             current_package = None
                             has_manpage = False
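The streaming approach matters because, as the removed comment notes, filelists can be large. A pared-down version of the same loop, with a placeholder path, looks like this:

```python
import gzip
import xml.etree.ElementTree as ET

with gzip.open("filelists.xml.gz", "rb") as f:  # placeholder path
    for event, elem in ET.iterparse(f, events=("end",)):
        if elem.tag.endswith("package"):
            print(elem.get("name"))
            elem.clear()  # drop the finished subtree so memory stays flat
```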
@@ -177,45 +159,16 @@ class ContentsParser:
         return packages

-    def get_package_man_files(self, filelists_path: Path) -> Dict[str, list]:
-        """Get detailed list of man files for each package.
-
-        Args:
-            filelists_path: Path to filelists.xml.gz file
-
-        Returns:
-            Dict mapping package name to list of man page paths
-        """
-        packages = {}
-
-        try:
-            with gzip.open(filelists_path, 'rb') as f:
-                context = ET.iterparse(f, events=('start', 'end'))
-
-                current_package = None
-                current_files = []
-
-                for event, elem in context:
-                    if event == 'start':
-                        if elem.tag.endswith('package'):
-                            current_package = elem.get('name')
-                            current_files = []
-                    elif event == 'end':
-                        if elem.tag.endswith('file'):
-                            file_path = elem.text
-                            if file_path and '/share/man/' in file_path:
-                                current_files.append(file_path)
-                        elif elem.tag.endswith('package'):
-                            if current_files and current_package:
-                                packages[current_package] = current_files
-                            elem.clear()
-                            current_package = None
-                            current_files = []
-        except Exception as e:
-            logger.error(f"Error parsing filelists: {e}")
-
-        return packages
+    @staticmethod
+    def _is_manpage_path(file_path: str) -> bool:
+        """Check if a file path is a man page location.
+
+        Args:
+            file_path: File path to check
+
+        Returns:
+            True if path is in a standard man page directory
+        """
+        return '/man/' in file_path and (
+            '/share/man/' in file_path or file_path.startswith('/usr/man/')
+        )
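The new predicate is easy to sanity-check in isolation; this sketch mirrors the static method above as a free function:

```python
def is_manpage_path(file_path: str) -> bool:
    # Mirrors ContentsParser._is_manpage_path
    return '/man/' in file_path and (
        '/share/man/' in file_path or file_path.startswith('/usr/man/')
    )

assert is_manpage_path('/usr/share/man/man1/bash.1.gz')
assert is_manpage_path('/usr/man/man8/ifup.8.gz')
assert not is_manpage_path('/usr/share/doc/pkg/manual.txt')
```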


@@ -52,7 +52,6 @@ class RepoManager:
self.cache_dir.mkdir(parents=True, exist_ok=True) self.cache_dir.mkdir(parents=True, exist_ok=True)
self.download_dir.mkdir(parents=True, exist_ok=True) self.download_dir.mkdir(parents=True, exist_ok=True)
# Initialize DNF
self.base = dnf.Base() self.base = dnf.Base()
self.base.conf.debuglevel = 0 self.base.conf.debuglevel = 0
self.base.conf.errorlevel = 0 self.base.conf.errorlevel = 0
@@ -67,28 +66,23 @@ class RepoManager:
repo = dnf.repo.Repo(repo_id, self.base.conf) repo = dnf.repo.Repo(repo_id, self.base.conf)
repo.baseurl = [self.repo_url] repo.baseurl = [self.repo_url]
repo.enabled = True repo.enabled = True
repo.gpgcheck = False # We verify checksums separately repo.gpgcheck = False
self.base.repos.add(repo) self.base.repos.add(repo)
logger.info(f"Configured repository: {repo_id} at {self.repo_url}") logger.info(f"Configured repository: {repo_id} at {self.repo_url}")
# Fill the sack (package database)
self.base.fill_sack(load_system_repo=False, load_available_repos=True) self.base.fill_sack(load_system_repo=False, load_available_repos=True)
logger.info("Repository metadata loaded") logger.info("Repository metadata loaded")
def discover_packages_with_manpages(self) -> Set[str]: def discover_packages_with_manpages(self) -> Set[str]:
"""Discover which packages contain man pages using filelists. """Discover which packages contain man pages using filelists.
This is the key optimization - we parse repository metadata
to identify packages with man pages before downloading anything.
Returns: Returns:
Set of package names that contain man pages Set of package names that contain man pages
""" """
if self.packages_with_manpages is not None: if self.packages_with_manpages is not None:
return self.packages_with_manpages return self.packages_with_manpages
# Try pub first, then vault if it fails
content_dirs = ["pub/rocky", "vault/rocky"] content_dirs = ["pub/rocky", "vault/rocky"]
for content_dir in content_dirs: for content_dir in content_dirs:
original_content_dir = self.config.content_dir original_content_dir = self.config.content_dir
@@ -99,9 +93,9 @@ class RepoManager:
) )
parser = ContentsParser(repo_url, self.cache_dir) parser = ContentsParser(repo_url, self.cache_dir)
packages = parser.get_packages_with_manpages() packages = parser.get_packages_with_manpages()
if packages: # Only use if it has man pages if packages:
self.packages_with_manpages = packages self.packages_with_manpages = packages
self.repo_url = repo_url # Set for later use self.repo_url = repo_url
logger.info(f"Using repository: {repo_url}") logger.info(f"Using repository: {repo_url}")
break break
else: else:
@@ -130,39 +124,29 @@ class RepoManager:
            f"Querying packages from {self.repo_type} ({self.version}/{self.arch})"
        )
-       # Get packages with man pages if filtering
        manpage_packages = None
        if with_manpages_only:
            manpage_packages = self.discover_packages_with_manpages()
            logger.info(f"Filtering to {len(manpage_packages)} packages with man pages")
-       # Configure DNF repo now that we have the correct repo_url
        self._configure_repo()
        packages = []
-       # Query all available packages
        query = self.base.sack.query().available()
-       # For each package name, get only one arch (prefer noarch, then our target arch)
        seen_names = set()
        for pkg in query:
            pkg_name = pkg.name
-           # Skip if we've already added this package
            if pkg_name in seen_names:
                continue
-           # Skip if filtering and package doesn't have man pages
            if manpage_packages and pkg_name not in manpage_packages:
                continue
-           # Get repo information
            repo = pkg.repo
            baseurl = repo.baseurl[0] if repo and repo.baseurl else self.repo_url
-           # Create Package object
+           chksum_type, chksum_value = pkg.chksum if pkg.chksum else ("sha256", "")
            package = Package(
                name=pkg_name,
                version=pkg.version,
@@ -171,16 +155,16 @@ class RepoManager:
                repo_type=self.repo_type,
                location=pkg.location,
                baseurl=baseurl,
-               checksum=pkg.chksum[1] if pkg.chksum else "",  # chksum is (type, value)
-               checksum_type=pkg.chksum[0] if pkg.chksum else "sha256",
-               has_manpages=True if manpage_packages else False,
+               checksum=chksum_value,
+               checksum_type=chksum_type,
+               has_manpages=bool(manpage_packages),
            )
            packages.append(package)
            seen_names.add(pkg_name)
        logger.info(f"Found {len(packages)} packages to process")
-       return sorted(packages)  # Sort by name for consistent ordering
+       return sorted(packages)

    def download_package(self, package: Package) -> bool:
        """Download a single package.
@@ -194,7 +178,6 @@ class RepoManager:
        download_path = self.download_dir / package.filename
        package.download_path = download_path
-       # Skip if already downloaded
        if download_path.exists():
            logger.debug(f"Package already downloaded: {package.filename}")
            return True
@@ -204,7 +187,6 @@ class RepoManager:
            response = requests.get(package.download_url, timeout=300, stream=True)
            response.raise_for_status()
-           # Download with progress (optional: could add progress bar here)
            with open(download_path, "wb") as f:
                for chunk in response.iter_content(chunk_size=8192):
                    if chunk:
@@ -215,7 +197,6 @@ class RepoManager:
        except Exception as e:
            logger.error(f"Error downloading {package.filename}: {e}")
-           # Clean up partial download
            if download_path.exists():
                download_path.unlink()
            return False
@@ -235,12 +216,10 @@ class RepoManager:
        downloaded = []
        with ThreadPoolExecutor(max_workers=max_workers) as executor:
-           # Submit all download tasks
            future_to_pkg = {
                executor.submit(self.download_package, pkg): pkg for pkg in packages
            }
-           # Process completed downloads
            for future in as_completed(future_to_pkg):
                pkg = future_to_pkg[future]
                try:
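The hunk above is truncated at the `try:`. For reference, a minimal self-contained sketch of the same fan-out pattern, assuming `download_package` returns a bool on success; `download_all` and `download_one` are illustrative names, not the project's API:

```python
# Sketch of the concurrent download fan-out, not the project's exact method body;
# download_one stands in for self.download_package.
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_all(download_one, packages, max_workers=5):
    """Run download_one(pkg) in a thread pool and collect the successes."""
    downloaded = []
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        future_to_pkg = {
            executor.submit(download_one, pkg): pkg for pkg in packages
        }
        for future in as_completed(future_to_pkg):
            pkg = future_to_pkg[future]
            try:
                if future.result():  # download_one returns True on success
                    downloaded.append(pkg)
            except Exception as exc:
                print(f"Download failed for {pkg}: {exc}")
    return downloaded
```

Because `as_completed` yields futures as they finish, one slow mirror does not stall accounting for the packages that are already done.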

@@ -24,31 +24,26 @@ class Config:
        parallel_conversions: Number of parallel HTML conversions
    """

-   # Repository configuration
    base_url: str = "http://dl.rockylinux.org/"
    content_dir: str = "pub/rocky"
    versions: List[str] = None
    architectures: List[str] = None
    repo_types: List[str] = None
-   # Directory configuration
    download_dir: Path = Path("/data/tmp/downloads")
    extract_dir: Path = Path("/data/tmp/extracts")
    output_dir: Path = Path("/data/html")
-   # Cleanup options
    keep_rpms: bool = False
    keep_extracts: bool = False
-   # Performance options
    parallel_downloads: int = 5
    parallel_conversions: int = 10
-   # Filtering options
    skip_sections: List[str] = None
    skip_packages: List[str] = None
-   skip_languages: bool = True  # Skip non-English languages by default
-   allow_all_sections: bool = False  # Override skip_sections if True
+   skip_languages: bool = True
+   allow_all_sections: bool = False

    def __post_init__(self):
        """Set defaults and ensure directories exist."""
@@ -56,20 +51,16 @@ class Config:
            self.versions = ["8.10", "9.6", "10.0"]
        if self.architectures is None:
-           # Man pages are arch-independent, so we just need one
-           # We prefer x86_64 as it's most common, fallback to others
            self.architectures = ["x86_64", "aarch64", "ppc64le", "s390x"]
        if self.repo_types is None:
            self.repo_types = ["BaseOS", "AppStream"]
-       # Set default skip sections (man3 library APIs)
        if self.skip_sections is None and not self.allow_all_sections:
            self.skip_sections = ["3", "3p", "3pm"]
        elif self.allow_all_sections:
            self.skip_sections = []
-       # Set default skip packages (high-volume API docs)
        if self.skip_packages is None:
            self.skip_packages = [
                "lapack",
@@ -77,7 +68,6 @@ class Config:
                "gl-manpages",
            ]
-       # Ensure all paths are Path objects
        self.download_dir = Path(self.download_dir)
        self.extract_dir = Path(self.extract_dir)
        self.output_dir = Path(self.output_dir)
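The `= None` fields above are a standard dataclass idiom: list defaults are mutable and dataclasses reject them as field defaults, so `__post_init__` fills them in. A cut-down, runnable sketch of the pattern; `MiniConfig` is a stand-in for illustration, not the real class:

```python
# Minimal sketch of Config's defaulting pattern: None sentinels resolved
# in __post_init__ instead of mutable dataclass field defaults.
from dataclasses import dataclass
from pathlib import Path
from typing import List

@dataclass
class MiniConfig:
    versions: List[str] = None
    download_dir: Path = Path("/data/tmp/downloads")
    allow_all_sections: bool = False
    skip_sections: List[str] = None

    def __post_init__(self):
        if self.versions is None:
            self.versions = ["8.10", "9.6", "10.0"]
        if self.skip_sections is None and not self.allow_all_sections:
            self.skip_sections = ["3", "3p", "3pm"]
        elif self.allow_all_sections:
            self.skip_sections = []
        self.download_dir = Path(self.download_dir)  # accept str or Path

print(MiniConfig(allow_all_sections=True).skip_sections)  # -> []
```

Resolving defaults in `__post_init__` also lets `allow_all_sections` veto the `skip_sections` default, which plain field defaults could not express.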

@@ -3,6 +3,7 @@
 import gzip
 import json
 import logging
+from collections import defaultdict
 from pathlib import Path
 from typing import List, Dict, Any
@@ -33,7 +34,6 @@ class WebGenerator:
        self.output_dir = Path(output_dir)
        self.output_dir.mkdir(parents=True, exist_ok=True)
-       # Setup Jinja2 environment
        self.env = Environment(
            loader=FileSystemLoader(str(self.template_dir)),
            autoescape=select_autoescape(["html", "xml"]),
@@ -66,7 +66,6 @@ class WebGenerator:
            content=man_file.html_content,
        )
-       # Ensure output path is set
        if not man_file.html_path:
            man_file.html_path = self._get_manpage_path(man_file, version)
@@ -127,24 +126,18 @@ class WebGenerator:
            True if successful
        """
        try:
-           # Group packages by first letter
-           packages_by_letter = {}
+           packages_by_letter = defaultdict(list)
            for pkg_name, pages in search_data.items():
                first_char = pkg_name[0].upper()
                if not first_char.isalpha():
                    first_char = "other"
-               if first_char not in packages_by_letter:
-                   packages_by_letter[first_char] = []
                packages_by_letter[first_char].append(
                    {"name": pkg_name, "count": len(pages)}
                )
-           # Sort packages within each letter
-           for letter in packages_by_letter:
-               packages_by_letter[letter].sort(key=lambda x: x["name"])
+           for packages in packages_by_letter.values():
+               packages.sort(key=lambda x: x["name"])

            template = self.env.get_template("packages.html")
@@ -188,7 +181,6 @@ class WebGenerator:
            if pkg_name not in index:
                index[pkg_name] = {}
-           # Create entry for this man page
            entry = {
                "name": man_file.name,
                "section": man_file.section,
@@ -198,7 +190,6 @@ class WebGenerator:
                "full_name": f"{man_file.package_name} - {man_file.display_name}",
            }
-           # Use display name as key (handles duplicates with different sections)
            key = man_file.display_name
            if man_file.language:
                key = f"{key}.{man_file.language}"
@@ -223,15 +214,11 @@ class WebGenerator:
        json_path = version_dir / "search.json"
        gz_path = version_dir / "search.json.gz"
-       # Sort for consistency
        sorted_index = {k: index[k] for k in sorted(index)}
-       # Save plain JSON
        with open(json_path, "w", encoding="utf-8") as f:
            json.dump(sorted_index, f, indent=2)
-       # Save gzipped JSON
        with gzip.open(gz_path, "wt", encoding="utf-8") as f:
            json.dump(sorted_index, f)
@@ -270,22 +257,18 @@ class WebGenerator:
        try:
            template = self.env.get_template("root.html")
-           # Group versions by major version
-           major_to_minors = {}
+           major_to_minors = defaultdict(list)
            for v in versions:
                try:
                    major, minor = v.split(".")
-                   major_to_minors.setdefault(major, []).append(minor)
+                   major_to_minors[major].append(minor)
                except ValueError:
-                   continue  # Skip invalid versions
+                   continue
-           # Sort majors ascending, minors descending within each major
            sorted_majors = sorted(major_to_minors, key=int)
-           max_minors = max(len(major_to_minors[major]) for major in sorted_majors)
+           max_minors = max((len(major_to_minors[m]) for m in sorted_majors), default=0)
            num_columns = len(sorted_majors)
-           # Create rows for grid layout (each row has one version from each major)
-           # This creates the data structure for proper column grouping
            version_rows = []
            for minor_idx in range(max_minors):
                row = []
@@ -294,7 +277,7 @@ class WebGenerator:
                    if minor_idx < len(minors_list):
                        row.append((major, minors_list[minor_idx]))
                    else:
-                       row.append(None)  # Placeholder for empty cells
+                       row.append(None)
                version_rows.append(row)

            html = template.render(
@@ -312,3 +295,28 @@ class WebGenerator:
        except Exception as e:
            logger.error(f"Error generating root index: {e}")
            return False
+
+   def generate_404_page(self) -> bool:
+       """Generate 404 error page.
+
+       Returns:
+           True if successful
+       """
+       try:
+           template = self.env.get_template("404.html")
+           html = template.render(
+               title="404 - Page Not Found"
+           )
+
+           error_path = self.output_dir / "404.html"
+           with open(error_path, "w", encoding="utf-8") as f:
+               f.write(html)
+
+           logger.info("Generated 404 page")
+           return True
+
+       except Exception as e:
+           logger.error(f"Error generating 404 page: {e}")
+           return False
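For context, here is the version-grid logic from `generate_root_index` run standalone on the default version list. Each row holds one cell per major version, and the `None` placeholder is what the template change at the end of this diff renders as an empty `<div>`:

```python
# Standalone run of the grid construction used by generate_root_index.
from collections import defaultdict

versions = ["8.10", "9.6", "10.0"]  # default list from Config
major_to_minors = defaultdict(list)
for v in versions:
    major, minor = v.split(".")
    major_to_minors[major].append(minor)

sorted_majors = sorted(major_to_minors, key=int)  # -> ["8", "9", "10"]
max_minors = max((len(major_to_minors[m]) for m in sorted_majors), default=0)

version_rows = []
for minor_idx in range(max_minors):
    row = []
    for major in sorted_majors:
        minors = major_to_minors[major]
        row.append((major, minors[minor_idx]) if minor_idx < len(minors) else None)
    version_rows.append(row)

print(version_rows)  # [[('8', '10'), ('9', '6'), ('10', '0')]]
```

With one minor per major this yields a single row; `None` cells only appear when the majors have unequal numbers of point releases.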

templates/404.html (new file, +137 lines)

@@ -0,0 +1,137 @@
{% extends "base.html" %}
{% block header_title %}Rocky Linux Man Pages{% endblock %}
{% block header_subtitle %}Man page documentation for Rocky Linux packages{% endblock %}
{% block extra_css %}
.error-container {
text-align: center;
padding: 4rem 2rem;
}
.error-code {
font-size: 8rem;
font-weight: 700;
color: var(--accent-primary);
line-height: 1;
margin-bottom: 1rem;
font-family: "JetBrains Mono", monospace;
}
.error-message {
font-size: 1.5rem;
color: var(--text-primary);
margin-bottom: 1rem;
}
.error-description {
color: var(--text-secondary);
margin-bottom: 2rem;
max-width: 600px;
margin-left: auto;
margin-right: auto;
}
.suggestions {
max-width: 600px;
margin: 2rem auto;
text-align: left;
}
.suggestions h3 {
color: var(--text-primary);
margin-bottom: 1rem;
}
.suggestions ul {
list-style: none;
padding: 0;
}
.suggestions li {
margin-bottom: 0.75rem;
padding-left: 1.5rem;
position: relative;
}
.suggestions li::before {
content: "→";
position: absolute;
left: 0;
color: var(--accent-primary);
}
.back-button {
display: inline-block;
padding: 0.75rem 1.5rem;
background: var(--accent-primary);
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: 500;
transition: all 0.2s;
margin-top: 2rem;
}
.back-button:hover {
background: var(--accent-secondary);
transform: translateY(-2px);
text-decoration: none;
}
@media (max-width: 768px) {
.error-code {
font-size: 5rem;
}
.error-message {
font-size: 1.25rem;
}
.error-container {
padding: 3rem 1rem;
}
}
@media (max-width: 480px) {
.error-code {
font-size: 4rem;
}
.error-message {
font-size: 1.1rem;
}
.error-container {
padding: 2rem 1rem;
}
.suggestions {
padding: 0 1rem;
}
}
{% endblock %}
{% block content %}
<div class="content">
<div class="error-container">
<div class="error-code">404</div>
<div class="error-message">Page Not Found</div>
<div class="error-description">
The page you're looking for doesn't exist or may have been moved.
</div>
<div class="suggestions">
<h3>Suggestions:</h3>
<ul>
<li>Check the URL for typos</li>
<li>Return to the <a href="/">home page</a> and navigate from there</li>
<li>Use the search feature on the version index page</li>
<li>The man page may be in a different version of Rocky Linux</li>
</ul>
</div>
<a href="/" class="back-button">Go to Home Page</a>
</div>
</div>
{% endblock %}
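How the generated `404.html` is actually served is left to the deployment, typically a web server's error-page directive. As a local-preview sketch only: a small Python server that serves the output tree and returns the generated page for unknown paths. The `/data/html` root mirrors the default `output_dir` and is an assumption; adjust to taste:

```python
# Throwaway preview server: serves the generated docs and answers 404s
# with the generated 404.html (assumes output dir /data/html).
import functools
import http.server
from pathlib import Path

ROOT = Path("/data/html")

class DocsHandler(http.server.SimpleHTTPRequestHandler):
    def send_error(self, code, message=None, explain=None):
        page = ROOT / "404.html"
        if code == 404 and page.exists():
            body = page.read_bytes()
            self.send_response(404)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            super().send_error(code, message, explain)

if __name__ == "__main__":
    handler = functools.partial(DocsHandler, directory=str(ROOT))
    http.server.ThreadingHTTPServer(("", 8000), handler).serve_forever()
```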

@@ -174,6 +174,8 @@
                <div class="version-browse">Browse man pages →</div>
                {% endif %}
            </a>
+           {% else %}
+           <div></div>
            {% endif %}
        {% endfor %}
    {% endfor %}