titanicprediction.core package

Submodules

titanicprediction.core.algorithms module

class titanicprediction.core.algorithms.GradientDescentResult(weights: ndarray, bias: float, loss_history: list[float], convergence_info: dict[str, Any])[source]

Bases: object

Dataclass capturing the state of a completed gradient descent run.

weights

Final weights after gradient descent optimization.

Type:

numpy.ndarray

bias

Final bias term after optimization.

Type:

float

loss_history

List of loss values during training.

Type:

list[float]

convergence_info

Dictionary containing convergence information.

Type:

dict[str, Any]

__init__(weights: ndarray, bias: float, loss_history: list[float], convergence_info: dict[str, Any]) None
bias: float
convergence_info: dict[str, Any]
loss_history: list[float]
weights: ndarray
titanicprediction.core.algorithms.add_polynomial_features(x: ndarray, degree: int = 3) ndarray[source]

Add polynomial features to the input data.

Parameters:
  • x – Input feature matrix.

  • degree – Degree of polynomial features. Defaults to 3.

Returns:

Transformed feature matrix with polynomial features.
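
The function body is not shown here; as an illustrative sketch only (the package may also generate cross terms or use a different column order), appending element-wise powers up to `degree` could look like:

```python
import numpy as np

def add_polynomial_features_sketch(x: np.ndarray, degree: int = 3) -> np.ndarray:
    """Append element-wise powers x**2 .. x**degree as extra columns.

    Sketch only: the library's actual feature construction is an assumption.
    """
    features = [x] + [x ** d for d in range(2, degree + 1)]
    return np.hstack(features)

x = np.array([[1.0, 2.0], [3.0, 4.0]])
expanded = add_polynomial_features_sketch(x, degree=3)
# 2 original columns expand to 6: [x1, x2, x1**2, x2**2, x1**3, x2**3]
```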

titanicprediction.core.algorithms.binary_cross_entropy(y_true: ndarray, y_pred: ndarray, weights: ndarray, lambda_reg: float = 0.01, epsilon: float = 1e-12) float[source]

Calculate binary cross-entropy loss with regularization.

Parameters:
  • y_true – Ground truth labels.

  • y_pred – Predicted probabilities.

  • weights – Model weights for regularization.

  • lambda_reg – Regularization parameter. Defaults to 0.01.

  • epsilon – Small value to avoid numerical issues. Defaults to 1e-12.

Returns:

Binary cross-entropy loss value with regularization.
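
A minimal sketch of the documented behavior, assuming an L2 penalty on the weights (the exact regularization scaling, e.g. a 1/n or 1/2n factor, is an assumption and may differ from the library):

```python
import numpy as np

def bce_with_l2_sketch(y_true, y_pred, weights, lambda_reg=0.01, epsilon=1e-12):
    """Binary cross-entropy plus an L2 weight penalty (illustrative sketch)."""
    # Clip probabilities away from 0 and 1 to avoid log(0)
    y_pred = np.clip(y_pred, epsilon, 1.0 - epsilon)
    bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    l2 = lambda_reg * np.sum(weights ** 2) / (2 * len(y_true))
    return float(bce + l2)

loss = bce_with_l2_sketch(
    np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.zeros(3))
# With zero weights the penalty vanishes and loss reduces to -ln(0.9)
```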

titanicprediction.core.algorithms.compute_gradients(x: ndarray, y_true: ndarray, y_pred: ndarray, weights: ndarray, lambda_reg: float = 0.01, class_weight: dict | None = None) tuple[ndarray, float][source]

Compute gradients for logistic regression.

Parameters:
  • x – Input feature matrix.

  • y_true – Ground truth labels.

  • y_pred – Predicted probabilities.

  • weights – Model weights for regularization.

  • lambda_reg – Regularization parameter. Defaults to 0.01.

  • class_weight – Optional class weights dictionary.

Returns:

  • Weight gradients

  • Bias gradient

Return type:

Tuple containing
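
As a sketch of what these gradients typically are for L2-regularized logistic regression (the placement of the 1/n factor on the penalty term is an assumption):

```python
import numpy as np

def compute_gradients_sketch(x, y_true, y_pred, weights, lambda_reg=0.01):
    """Weight and bias gradients for regularized logistic regression (sketch)."""
    n = len(y_true)
    error = y_pred - y_true                       # dBCE/dlogits for the sigmoid link
    dw = x.T @ error / n + lambda_reg * weights / n
    db = float(np.mean(error))
    return dw, db

x = np.array([[1.0, 0.0], [0.0, 1.0]])
dw, db = compute_gradients_sketch(
    x, np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.zeros(2), lambda_reg=0.0)
# dw pulls the first weight up and the second down; errors cancel in db
```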

titanicprediction.core.algorithms.gradient_descent(x: ndarray, y: ndarray, learning_rate: float = 0.001, epochs: int = 1000, convergence_tol: float = 1e-06, beta1: float = 0.9, beta2: float = 0.999, epsilon: float = 1e-08, lambda_reg: float = 0.01) GradientDescentResult[source]

Perform gradient descent optimization with Adam optimizer.

Parameters:
  • x – Input feature matrix.

  • y – Target labels.

  • learning_rate – Learning rate for optimization. Defaults to 0.001.

  • epochs – Maximum number of epochs. Defaults to 1000.

  • convergence_tol – Convergence tolerance. Defaults to 1e-6.

  • beta1 – Adam beta1 parameter. Defaults to 0.9.

  • beta2 – Adam beta2 parameter. Defaults to 0.999.

  • epsilon – Adam epsilon parameter. Defaults to 1e-8.

  • lambda_reg – Regularization parameter. Defaults to 0.01.

Returns:

GradientDescentResult containing optimization results.
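
A self-contained sketch of an Adam-optimized logistic regression loop with the documented parameters (this is not the package's implementation; details such as loss clipping and the convergence check are assumptions):

```python
import numpy as np

def adam_logistic_sketch(x, y, learning_rate=0.001, epochs=1000,
                         convergence_tol=1e-6, beta1=0.9, beta2=0.999,
                         epsilon=1e-8, lambda_reg=0.01):
    """Logistic regression trained with Adam (illustrative sketch)."""
    n, d = x.shape
    w, b = np.zeros(d), 0.0
    m_w, v_w = np.zeros(d), np.zeros(d)   # first/second moment estimates
    m_b = v_b = 0.0
    loss_history = []
    for t in range(1, epochs + 1):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))            # sigmoid probabilities
        p_c = np.clip(p, 1e-12, 1 - 1e-12)
        loss = -np.mean(y * np.log(p_c) + (1 - y) * np.log(1 - p_c))
        loss += lambda_reg * np.sum(w ** 2) / (2 * n)
        loss_history.append(float(loss))
        dw = x.T @ (p - y) / n + lambda_reg * w / n
        db = float(np.mean(p - y))
        # Adam moment updates with bias correction
        m_w = beta1 * m_w + (1 - beta1) * dw
        v_w = beta2 * v_w + (1 - beta2) * dw ** 2
        m_b = beta1 * m_b + (1 - beta1) * db
        v_b = beta2 * v_b + (1 - beta2) * db ** 2
        m_w_hat, v_w_hat = m_w / (1 - beta1 ** t), v_w / (1 - beta2 ** t)
        m_b_hat, v_b_hat = m_b / (1 - beta1 ** t), v_b / (1 - beta2 ** t)
        w -= learning_rate * m_w_hat / (np.sqrt(v_w_hat) + epsilon)
        b -= learning_rate * m_b_hat / (np.sqrt(v_b_hat) + epsilon)
        if len(loss_history) > 1 and abs(loss_history[-2] - loss) < convergence_tol:
            break
    return w, b, loss_history

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(float)
w, b, history = adam_logistic_sketch(x, y, learning_rate=0.05, epochs=300)
```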

titanicprediction.core.algorithms.predict(x: ndarray, weights: ndarray, bias: float, threshold: float | None = None) ndarray[source]

Make binary predictions.

Parameters:
  • x – Input feature matrix.

  • weights – Model weights.

  • bias – Model bias.

  • threshold – Classification threshold. If None, 0.5 is used.

Returns:

Binary predictions (0 or 1).
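
As a sketch of the documented thresholding behavior (the actual implementation is not shown; the fallback from None to 0.5 matches the docstring):

```python
import numpy as np

def predict_sketch(x, weights, bias, threshold=None):
    """Threshold sigmoid probabilities into 0/1 labels (illustrative sketch)."""
    if threshold is None:
        threshold = 0.5                                  # documented default
    proba = 1.0 / (1.0 + np.exp(-(x @ weights + bias)))
    return (proba >= threshold).astype(int)

x = np.array([[2.0], [-2.0]])
labels = predict_sketch(x, np.array([1.0]), 0.0)
# sigmoid(2) > 0.5 -> 1, sigmoid(-2) < 0.5 -> 0
```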

titanicprediction.core.algorithms.predict_proba(x: ndarray, weights: ndarray, bias: float) ndarray[source]

Predict probabilities for binary classification.

Parameters:
  • x – Input feature matrix.

  • weights – Model weights.

  • bias – Model bias.

Returns:

Predicted probabilities for positive class.
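
A minimal sketch of the probability computation, assuming a logistic link over the linear score (the package's exact implementation is not shown):

```python
import numpy as np

def predict_proba_sketch(x, weights, bias):
    """Positive-class probability via the logistic link (illustrative sketch)."""
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))

proba = predict_proba_sketch(np.array([[0.0]]), np.array([1.0]), 0.0)
# A zero score maps to a probability of exactly 0.5
```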

titanicprediction.core.algorithms.sigmoid(z: ndarray) ndarray[source]

Compute the sigmoid function.

Applies the sigmoid transformation element-wise to a float array.

Parameters:

z – Input array.

Returns:

Element-wise sigmoid of the input array.
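
A sketch of a numerically stable element-wise sigmoid; splitting on the sign of z avoids overflow in np.exp for large |z| (whether the library uses this trick is an assumption):

```python
import numpy as np

def sigmoid_sketch(z: np.ndarray) -> np.ndarray:
    """Numerically stable element-wise sigmoid (illustrative sketch)."""
    out = np.empty_like(z, dtype=float)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))   # exp of a non-positive value: safe
    ez = np.exp(z[~pos])                       # exp of a negative value: safe
    out[~pos] = ez / (1.0 + ez)
    return out

vals = sigmoid_sketch(np.array([-1000.0, 0.0, 1000.0]))
# Saturates cleanly at 0.0 and 1.0 instead of raising overflow warnings
```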

titanicprediction.core.algorithms.standard_gradient_descent(x: ndarray, y: ndarray, learning_rate: float = 0.01, epochs: int = 1000, convergence_tol: float = 1e-06, beta: float = 0.9, lambda_reg: float = 0.01) GradientDescentResult[source]

Perform standard gradient descent with momentum.

Parameters:
  • x – Input feature matrix.

  • y – Target labels.

  • learning_rate – Learning rate for optimization. Defaults to 0.01.

  • epochs – Maximum number of epochs. Defaults to 1000.

  • convergence_tol – Convergence tolerance. Defaults to 1e-6.

  • beta – Momentum parameter. Defaults to 0.9.

  • lambda_reg – Regularization parameter. Defaults to 0.01.

Returns:

GradientDescentResult containing optimization results.
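
For contrast with the Adam variant above, a sketch of logistic regression with classical momentum using the documented parameters (the velocity update form and convergence check are assumptions):

```python
import numpy as np

def momentum_gd_sketch(x, y, learning_rate=0.01, epochs=1000,
                       convergence_tol=1e-6, beta=0.9, lambda_reg=0.01):
    """Logistic regression trained with momentum (illustrative sketch)."""
    n, d = x.shape
    w, b = np.zeros(d), 0.0
    vel_w, vel_b = np.zeros(d), 0.0            # velocity terms
    loss_history = []
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        p_c = np.clip(p, 1e-12, 1 - 1e-12)
        loss = -np.mean(y * np.log(p_c) + (1 - y) * np.log(1 - p_c))
        loss_history.append(float(loss))
        dw = x.T @ (p - y) / n + lambda_reg * w / n
        db = float(np.mean(p - y))
        vel_w = beta * vel_w + (1 - beta) * dw  # exponentially averaged gradient
        vel_b = beta * vel_b + (1 - beta) * db
        w -= learning_rate * vel_w
        b -= learning_rate * vel_b
        if len(loss_history) > 1 and abs(loss_history[-2] - loss) < convergence_tol:
            break
    return w, b, loss_history

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 2))
y = (x[:, 0] > 0).astype(float)
w, b, history = momentum_gd_sketch(x, y, learning_rate=0.5, epochs=500)
```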

titanicprediction.core.services module

class titanicprediction.core.services.ConfidenceInterval(lower_bound: float, upper_bound: float, confidence_level: float)[source]

Bases: object

__init__(lower_bound: float, upper_bound: float, confidence_level: float) None
confidence_level: float
lower_bound: float
upper_bound: float
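
How ConfidenceInterval's bounds are actually derived is not documented here; as one common construction, a normal-approximation (Wald) interval over a probability estimate looks like the following sketch (the z-score table and clamping to [0, 1] are assumptions):

```python
import math

def wald_interval_sketch(p: float, n: int, confidence_level: float = 0.95):
    """Normal-approximation interval for a probability (illustrative sketch)."""
    # Two-sided z-scores; only these three levels are supported in the sketch
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence_level]
    half = z * math.sqrt(p * (1 - p) / n)
    lower = max(0.0, p - half)   # clamp to the valid probability range
    upper = min(1.0, p + half)
    return lower, upper, confidence_level

lo, hi, level = wald_interval_sketch(0.7, n=100)
```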
class titanicprediction.core.services.CrossValidationResult(fold_results: list[titanicprediction.core.services.EvaluationResult], mean_accuracy: float, mean_precision: float, mean_recall: float, mean_f1: float, std_accuracy: float, std_precision: float, std_recall: float, std_f1: float)[source]

Bases: object

__init__(fold_results: list[EvaluationResult], mean_accuracy: float, mean_precision: float, mean_recall: float, mean_f1: float, std_accuracy: float, std_precision: float, std_recall: float, std_f1: float) None
fold_results: list[EvaluationResult]
mean_accuracy: float
mean_f1: float
mean_precision: float
mean_recall: float
std_accuracy: float
std_f1: float
std_precision: float
std_recall: float
class titanicprediction.core.services.EvaluationResult(accuracy: float, precision: float, recall: float, f1_score: float, confusion_matrix: numpy.ndarray, classification_report: dict[str, Any])[source]

Bases: object

__init__(accuracy: float, precision: float, recall: float, f1_score: float, confusion_matrix: ndarray, classification_report: dict[str, Any]) None
accuracy: float
classification_report: dict[str, Any]
confusion_matrix: ndarray
f1_score: float
precision: float
recall: float
class titanicprediction.core.services.IModelTrainingService(*args, **kwargs)[source]

Bases: Protocol

__init__(*args, **kwargs)
cross_validate(dataset: Dataset, config: TrainingConfig, folds: int) CrossValidationResult[source]
evaluate_model(model: TrainedModel, test_data: Dataset) EvaluationResult[source]
train_model(dataset: Dataset, config: TrainingConfig) TrainingResult[source]
class titanicprediction.core.services.IPredictionService(*args, **kwargs)[source]

Bases: Protocol

__init__(*args, **kwargs)
batch_predict(passengers: list[Passenger]) list[PredictionResult][source]
get_prediction_confidence(prediction: PredictionResult) ConfidenceInterval[source]
predict_survival(passenger: Passenger) PredictionResult[source]
class titanicprediction.core.services.ModelExplanationService(prediction_service: titanicprediction.core.services.PredictionService)[source]

Bases: object

__init__(prediction_service: PredictionService) None
_calculate_feature_impacts(passenger: Passenger) list[FeatureImpactAnalysis][source]
_determine_confidence_level(probability: float) str[source]
_extract_decision_factors(feature_impacts: list[FeatureImpactAnalysis]) list[str][source]
explain_prediction(passenger: Passenger) PredictionExplanation[source]
get_model_statistics(model: TrainedModel) dict[str, Any][source]
prediction_service: PredictionService
class titanicprediction.core.services.ModelTrainingService(preprocessor: DataPreprocessor)[source]

Bases: object

__init__(preprocessor: DataPreprocessor)[source]
_align_features_with_model(features: DataFrame, model: TrainedModel) DataFrame[source]
_calculate_feature_importance(model: TrainedModel) dict[str, float][source]
cross_validate(dataset: Dataset, config: TrainingConfig, folds: int = 5) CrossValidationResult[source]
evaluate_model(model: TrainedModel, test_data: Dataset) EvaluationResult[source]
train_model(dataset: Dataset, config: TrainingConfig) TrainingResult[source]
class titanicprediction.core.services.PredictionResult(passenger: titanicprediction.entities.core.Passenger, probability: float, prediction: bool, confidence: float, timestamp: datetime.datetime)[source]

Bases: object

__init__(passenger: Passenger, probability: float, prediction: bool, confidence: float, timestamp: datetime) None
confidence: float
passenger: Passenger
prediction: bool
probability: float
timestamp: datetime
class titanicprediction.core.services.PredictionService(model: TrainedModel, preprocessor: DataPreprocessor)[source]

Bases: object

__init__(model: TrainedModel, preprocessor: DataPreprocessor)[source]
_align_features(features: ndarray, expected_feature_names: list[str]) ndarray[source]
_calculate_confidence(probability: float) float[source]
_passenger_to_dataframe(passenger: Passenger) DataFrame[source]
batch_predict(passengers: list[Passenger]) list[PredictionResult][source]
get_prediction_confidence(prediction: PredictionResult) ConfidenceInterval[source]
predict_survival(passenger: Passenger) PredictionResult[source]
class titanicprediction.core.services.ServiceFactory[source]

Bases: object

static create_explanation_service(prediction_service: PredictionService) ModelExplanationService[source]
static create_prediction_service(model: TrainedModel, preprocessor: DataPreprocessor) PredictionService[source]
static create_training_service(preprocessor: DataPreprocessor) ModelTrainingService[source]
class titanicprediction.core.services.TrainingConfig(learning_rate: float = 0.01, epochs: int = 1000, test_size: float = 0.2, random_state: int = 42, convergence_tol: float = 1e-06, lambda_reg: float = 0.01, polynomial_degree: int = 2, use_adam: bool = True, beta1: float = 0.9, beta2: float = 0.999, early_stopping_patience: int = 50)[source]

Bases: object

__init__(learning_rate: float = 0.01, epochs: int = 1000, test_size: float = 0.2, random_state: int = 42, convergence_tol: float = 1e-06, lambda_reg: float = 0.01, polynomial_degree: int = 2, use_adam: bool = True, beta1: float = 0.9, beta2: float = 0.999, early_stopping_patience: int = 50) None
beta1: float = 0.9
beta2: float = 0.999
convergence_tol: float = 1e-06
early_stopping_patience: int = 50
epochs: int = 1000
lambda_reg: float = 0.01
learning_rate: float = 0.01
polynomial_degree: int = 2
random_state: int = 42
test_size: float = 0.2
use_adam: bool = True
class titanicprediction.core.services.TrainingResult(model: titanicprediction.entities.core.TrainedModel, training_time: float, final_loss: float, metrics: dict[str, float], learning_curve: list[float], feature_importance: dict[str, float], config: titanicprediction.core.services.TrainingConfig)[source]

Bases: object

__init__(model: TrainedModel, training_time: float, final_loss: float, metrics: dict[str, float], learning_curve: list[float], feature_importance: dict[str, float], config: TrainingConfig) None
config: TrainingConfig
feature_importance: dict[str, float]
final_loss: float
learning_curve: list[float]
metrics: dict[str, float]
model: TrainedModel
training_time: float

Module contents