CelebA Object Detection (YoloV7)
Dataset Script
Key components
YoloV7 utils
from code_loader.helpers.detection.yolo.decoder import Decoder
from code_loader.helpers.detection.yolo.utils import scale_loc_prediction, reshape_output_list
from code_loader.helpers.detection.yolo.grid import Grid
from code_loader.helpers.detection.yolo.loss import YoloLoss
from code_loader.helpers.detection.utils import xywh_to_xyxy_format, xyxy_to_xywh_format, jaccard
# -------------------------------------OD Functions ----------------------------------- #
CATEGORIES = ['face'] # class names
BACKGROUND_LABEL = 1
MAX_BB_PER_IMAGE = 30
CLASSES = 1
IMAGE_SIZE = (640, 640)
FEATURE_MAPS = ((80, 80), (40, 40), (20, 20))
BOX_SIZES = (((10, 13), (16, 30), (33, 23)),
             ((30, 61), (62, 45), (59, 119)),
             ((116, 90), (156, 198), (373, 326)))  # (w, h) anchor pairs per scale (tiny face-detector config)
NUM_FEATURES = len(FEATURE_MAPS)
NUM_PRIORS = len(BOX_SIZES[0]) * len(BOX_SIZES)  # 3 anchor shapes x 3 scales = 9
OFFSET = 0
STRIDES = (8, 16, 32)
CONF_THRESH = 0.35
NMS_THRESH = 0.65
OVERLAP_THRESH = 0.0625  # i.e. 1/16
BOXES_GENERATOR = Grid(image_size=IMAGE_SIZE, feature_maps=FEATURE_MAPS, box_sizes=BOX_SIZES,
                       strides=STRIDES, offset=OFFSET)
DEFAULT_BOXES = BOXES_GENERATOR.generate_anchors()
LOSS_FN = YoloLoss(num_classes=CLASSES, overlap_thresh=OVERLAP_THRESH,
                   default_boxes=DEFAULT_BOXES, background_label=BACKGROUND_LABEL,
                   from_logits=False, weights=[4.0, 1.0, 0.4], max_match_per_gt=10)
DECODER = Decoder(CLASSES,
                  background_label=BACKGROUND_LABEL,
                  top_k=20,
                  conf_thresh=CONF_THRESH,
                  nms_thresh=NMS_THRESH,
                  max_bb_per_layer=MAX_BB_PER_IMAGE,
                  max_bb=MAX_BB_PER_IMAGE)

Preprocess
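The grid settings above define the model's anchor priors: 3 anchor shapes at every cell of the 80x80, 40x40, and 20x20 feature maps, 25,200 priors in total. A minimal pure-Python sketch of that layout, assuming a cell-center offset convention — the actual `Grid.generate_anchors` implementation in `code_loader` may differ:

```python
# Hypothetical stand-in for the anchor grid; not the code_loader Grid API.
IMAGE_SIZE = (640, 640)
FEATURE_MAPS = ((80, 80), (40, 40), (20, 20))
BOX_SIZES = (((10, 13), (16, 30), (33, 23)),
             ((30, 61), (62, 45), (59, 119)),
             ((116, 90), (156, 198), (373, 326)))
STRIDES = (8, 16, 32)

def generate_anchors():
    """Return one (cx, cy, w, h) box per anchor shape per grid cell."""
    anchors = []
    for (fh, fw), sizes, stride in zip(FEATURE_MAPS, BOX_SIZES, STRIDES):
        for row in range(fh):
            for col in range(fw):
                # assumed convention: anchor centered in its grid cell
                cx, cy = (col + 0.5) * stride, (row + 0.5) * stride
                for (w, h) in sizes:
                    anchors.append((cx, cy, w, h))
    return anchors

anchors = generate_anchors()
# 3 anchors per cell over 80x80 + 40x40 + 20x20 cells = 25200 priors
print(len(anchors))  # 25200
```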
Input Images
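Every input must be brought to the model's 640x640 resolution. A minimal illustrative sketch in pure Python (nearest-neighbour resize plus [0, 1] scaling); a real dataset script would use PIL, OpenCV, or NumPy instead:

```python
# Hypothetical preprocessing sketch; function name and approach are
# illustrative, not the actual dataset-script implementation.
IMAGE_SIZE = (640, 640)

def preprocess(image):
    """image: 2-D list of rows of [r, g, b] uint8 pixels."""
    src_h, src_w = len(image), len(image[0])
    dst_h, dst_w = IMAGE_SIZE
    out = []
    for y in range(dst_h):
        src_y = y * src_h // dst_h  # nearest-neighbour source row
        row = []
        for x in range(dst_w):
            src_x = x * src_w // dst_w
            row.append([c / 255.0 for c in image[src_y][src_x]])
        out.append(row)
    return out

img = [[[128, 128, 128]] * 100 for _ in range(100)]  # dummy 100x100 grey image
resized = preprocess(img)
print(len(resized), len(resized[0]))  # 640 640
```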
Ground Truth
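The loss expects a fixed-shape ground-truth tensor per image, which is what `MAX_BB_PER_IMAGE` and `BACKGROUND_LABEL` suggest: real boxes first, then background-labelled padding rows. A hypothetical sketch, not the actual dataset-script function:

```python
# Illustrative ground-truth padding; names are placeholders.
MAX_BB_PER_IMAGE = 30
BACKGROUND_LABEL = 1

def pad_ground_truth(boxes):
    """boxes: list of [x, y, w, h, class_id] rows (class 0 = 'face').
    Returns exactly MAX_BB_PER_IMAGE rows, padding with zero-sized
    boxes labelled BACKGROUND_LABEL."""
    padded = [list(b) for b in boxes[:MAX_BB_PER_IMAGE]]
    while len(padded) < MAX_BB_PER_IMAGE:
        padded.append([0.0, 0.0, 0.0, 0.0, BACKGROUND_LABEL])
    return padded

gt = pad_ground_truth([[0.4, 0.35, 0.2, 0.3, 0]])  # one 'face' box
print(len(gt))    # 30
print(gt[1][4])   # 1 (background padding)
```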
Complete Dataset Script
Exporting an ONNX Model

Example ONNX model
Model Integration
Removing last node

Setting up the model

