Pull Request for DL-Simplified 💡
Issue Title: American Sign Language Detection #312
JWOC Participant
SSOC-2023 Participant
Closes: #312
Describe the add-ons or changes you've made 📃
I chose convolutional neural networks (CNNs) for this multi-class classification task, since the dataset consists of 36 different sign language classes (A–Z and 0–9), and each class needs to be identified accurately to improve the overall accuracy of the model.
I decided to utilize four pre-trained models on the ImageNet dataset: VGG16, InceptionResNetV2, InceptionV3, and MobileNet.
VGG16 is a deep CNN with 16 layers and is well-known for its exceptional performance in image recognition tasks. It has been widely used as a benchmark model in computer vision research.
InceptionResNetV2 combines the concepts of the Inception and ResNet models. It is a powerful CNN architecture with improved accuracy and efficiency.
InceptionV3 is another CNN architecture that has been successful in image classification tasks. It incorporates inception modules to capture multi-scale features.
MobileNet is specifically designed for mobile and embedded devices with limited resources. It offers a good balance between accuracy and computational efficiency.
Each of these models has its own strengths and is suitable for different scenarios depending on the available resources and desired trade-offs between accuracy and computational requirements.
In the "models" folder, I have included each of the models I built as a separate .ipynb file.
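As a rough sketch of how one of these pre-trained backbones can be adapted to the 36-class (A–Z, 0–9) problem: the head layers, dropout rate, and input size below are illustrative assumptions, not the exact configuration used in the notebooks (and `weights=None` is used here only to avoid the ImageNet download; the notebooks use the pre-trained weights).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 36  # A-Z plus 0-9

# Load MobileNet without its ImageNet classification head.
# In practice weights="imagenet" would be used for transfer learning.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights=None
)
base.trainable = False  # freeze the backbone; train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),  # illustrative regularization, not the PR's value
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The same pattern applies to VGG16, InceptionV3, and InceptionResNetV2 by swapping the `tf.keras.applications` constructor.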
Type of change ☑️
What sort of change have you made:
How Has This Been Tested? ⚙️
I tested each model by generating a classification report after fitting it for the required number of epochs. I then passed an image file path to
predict_images( )
and displayed the prediction; each time, it matched the class name of the image's directory.
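A helper of the shape described above might look like the following; this is a hypothetical sketch of `predict_images( )`, and the preprocessing (rescaling to [0, 1], target size) is an assumption that would need to match whatever the model was trained with.

```python
import numpy as np
import tensorflow as tf

def predict_image(model, image_path, class_names, target_size=(224, 224)):
    """Load one image from disk and return the predicted class name.

    Hypothetical sketch of the predict_images() helper; preprocessing
    here assumes pixel values were rescaled to [0, 1] during training.
    """
    img = tf.keras.utils.load_img(image_path, target_size=target_size)
    arr = tf.keras.utils.img_to_array(img) / 255.0  # assumed rescaling
    arr = np.expand_dims(arr, axis=0)               # add batch dimension
    probs = model.predict(arr, verbose=0)[0]
    return class_names[int(np.argmax(probs))]
```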
Accuracy Comparison of the Different Models

| Model | Accuracy |
| ----- | -------- |
These are the predicted labels:
Checklist: ☑️