The 21-Day Python + AI Challenge

It has been a while since I last took on any kind of challenge, so over these three weeks let's write some Python programs related to AI applications together!

This "21-Day Python + AI Challenge" aims to help you grasp the basics of Python programming and artificial intelligence in just three weeks. It covers everything from fundamental syntax to practical applications, so you can get up to speed quickly and start building your own creative applications.

These 21 mini-lessons cover the following areas:
😺 Setting up the development environment
😸 Speech technology
😹 Computer vision
😻 Hand position and gesture detection
😼 Body pose detection
😽 Face recognition
🙀 Machine learning

Since each day's article is fairly long, each one is published as a separate post to keep this page from becoming too lengthy. Please follow the hyperlinks to read them ^_^


🟡 Day00 (13 Oct 2024) Let's get the Python development environment ready today!

Get your development environment ready first, and coding will go much more smoothly!
Setting up a Python Development Environment (Windows 10 or later)

🟡 Day01 (14 Oct 2024) Text to Speech

This text-to-speech example uses the Python module pyttsx3 to implement text-to-speech (TTS) functionality. pyttsx3 is a cross-platform TTS tool that supports multiple speech synthesizers. With TTS, your programs can interact with users through voice!
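Below is a minimal sketch of the idea, assuming pyttsx3 is installed; the speaking rate and the spoken text are just illustrative values, not the article's exact code.

```python
# A minimal pyttsx3 sketch; the rate value and spoken text are illustrative.
import pyttsx3

engine = pyttsx3.init()          # picks the platform's TTS driver (e.g. SAPI5 on Windows)
engine.setProperty("rate", 150)  # speaking speed in words per minute
engine.say("Hello! Welcome to the 21-day Python and AI challenge.")
engine.runAndWait()              # block until the speech queue has been spoken
```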

Click here

🟡 Day02 (15 Oct 2024) Speech Recognition

This program records a segment of audio and then uses speech recognition technology to convert it into text. It is particularly useful for applications that need to turn spoken content into written text, such as voice notes and voice commands.
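A minimal sketch of that flow, assuming the SpeechRecognition and PyAudio packages and Google's free web recognizer; the language code is an assumption and may not match the article's choice.

```python
# A minimal speech-to-text sketch using the SpeechRecognition package.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:               # requires PyAudio
    recognizer.adjust_for_ambient_noise(source)
    print("Speak now...")
    audio = recognizer.listen(source)         # record until a pause is detected

try:
    # The language code is an assumption; change it to suit your speech.
    text = recognizer.recognize_google(audio, language="zh-TW")
    print("You said:", text)
except sr.UnknownValueError:
    print("Could not understand the audio.")
```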

Click here

🟡 Day03 (16 Oct 2024) Hand Landmarks Detection

Today's program uses OpenCV and MediaPipe for real-time hand detection and landmark drawing. It is the basic template for this kind of hand-detection program; over the next few days we will build on it by gradually adding different features.
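A rough sketch of that basic template using MediaPipe's solutions API; camera index and confidence values are assumptions.

```python
# A minimal real-time hand landmark sketch with OpenCV + MediaPipe.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=2, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand_landmarks in results.multi_hand_landmarks:
                mp_drawing.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("Hands", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```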

Click here

🟡 Day04 (17 Oct 2024) Hand Posture Recognition (Representing Gestures as Numbers)

Today's program uses OpenCV and MediaPipe for real-time hand detection, determines whether each finger is extended, and displays the finger states in binary and decimal form in the top-left corner of the screen. Converting gestures into numbers like this greatly reduces the difficulty of gesture recognition.
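One common heuristic for the finger-state step is sketched below; it assumes an upright hand facing the camera and may differ from the article's exact rules.

```python
# A sketch of turning MediaPipe hand landmarks into a 5-bit number (one bit per finger).
# Assumes an upright hand; the author's exact rules may differ.

def fingers_to_number(hand_landmarks):
    lm = hand_landmarks.landmark
    states = []
    # Thumb: compare tip (4) with the joint below it (3) on the x axis
    # (only valid for one hand orientation).
    states.append(1 if lm[4].x < lm[3].x else 0)
    # Other fingers count as extended when the tip is above the PIP joint (smaller y).
    for tip, pip in ((8, 6), (12, 10), (16, 14), (20, 18)):
        states.append(1 if lm[tip].y < lm[pip].y else 0)
    binary = "".join(str(s) for s in states)   # e.g. "01100"
    return binary, int(binary, 2)              # binary string and its decimal value
```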

Click here

🟡 Day05 (18 Oct 2024) Hand Posture Recognition (Filtering Out Rude Gestures)

The moment everyone has been waiting for! Today's program uses OpenCV and MediaPipe for real-time hand detection, and when it detects the user raising their middle finger, it applies a mosaic effect to the corresponding area of the frame.
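The gesture check can reuse the finger-state idea from Day04; the mosaic itself is usually just shrinking the region and scaling it back up with nearest-neighbour interpolation. A sketch of that step, with the block size as an assumption:

```python
# Pixelate a rectangular region of a frame to create a mosaic effect.
import cv2

def mosaic(frame, x, y, w, h, block=15):
    roi = frame[y:y + h, x:x + w]
    # Shrink the region, then blow it back up without smoothing.
    small = cv2.resize(roi, (max(1, w // block), max(1, h // block)),
                       interpolation=cv2.INTER_LINEAR)
    frame[y:y + h, x:x + w] = cv2.resize(small, (w, h),
                                         interpolation=cv2.INTER_NEAREST)
    return frame
```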

Click here

🟡 Day06 (19 Oct 2024) Calculating the Distance Between the Index Fingertip and Thumb Tip

Today's program uses OpenCV and MediaPipe for real-time hand detection and calculates the distance between the tip of the index finger and the tip of the thumb.
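The distance calculation itself is a small step on top of the Day03 template; a sketch, assuming MediaPipe's normalized landmark coordinates:

```python
# Distance in pixels between thumb tip (landmark 4) and index fingertip (landmark 8).
import math

def fingertip_distance(hand_landmarks, frame_width, frame_height):
    thumb = hand_landmarks.landmark[4]
    index = hand_landmarks.landmark[8]
    # Landmarks are normalized to [0, 1]; convert to pixels before measuring.
    dx = (thumb.x - index.x) * frame_width
    dy = (thumb.y - index.y) * frame_height
    return math.hypot(dx, dy)
```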

Click here

🟡 Day07 (20 Oct 2024) Detecting a Slapping Motion of the Hand

Today's program uses OpenCV and MediaPipe for real-time hand detection and estimates the distance between the hand and the camera. When a slap-like motion is detected, the colour of the palm changes from blue to green.

Click here

🟡 Day08 (21 Oct 2024) Tracking Hand Position and Movement with a Magic Effect

Today's program uses OpenCV and MediaPipe for real-time hand detection, and when it detects an open hand, it displays a rotating magic-circle effect.

Click here

🟡 Day09 (22 Oct 2024) Face Detection

Today's code example demonstrates how to perform real-time face detection using OpenCV and MediaPipe. The program captures live video from the camera, detects faces in each frame, and uses MediaPipe's drawing utilities to mark the detected face positions on the image.
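A minimal sketch using MediaPipe's FaceDetection solution; the confidence threshold is an assumption.

```python
# Real-time face detection sketch with OpenCV + MediaPipe FaceDetection.
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_face.FaceDetection(min_detection_confidence=0.5) as detector:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = detector.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.detections:
            for detection in results.detections:
                # Draws the bounding box and six facial key points.
                mp_drawing.draw_detection(frame, detection)
        cv2.imshow("Face Detection", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```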

Click here

🟡 Day10 (23 Oct 2024) Face Landmark Detection

Today's code example demonstrates how to perform real-time face detection using OpenCV and MediaPipe. The program captures live video from the camera, detects faces in the frames, and uses MediaPipe's drawing tools to mark the detected facial landmarks on the image. This example is more detailed than yesterday's, but it also involves processing much more data.
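A rough sketch of the same idea with MediaPipe Face Mesh, which returns hundreds of landmarks per face; the parameter values are assumptions.

```python
# Real-time face landmark sketch with MediaPipe Face Mesh.
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_face_mesh.FaceMesh(max_num_faces=1, min_detection_confidence=0.5) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            for face_landmarks in results.multi_face_landmarks:
                mp_drawing.draw_landmarks(frame, face_landmarks,
                                          mp_face_mesh.FACEMESH_TESSELATION)
        cv2.imshow("Face Mesh", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```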

Click here

🟡 Day11 (24 Oct 2024) Virtual Mask

Today's program uses the OpenCV and MediaPipe modules for real-time face detection and overlays a mask image on the detected face. It is an interactive and fun project, well suited to beginners who want to deepen their understanding of computer vision.

Click here

🟡 Day12 (25 Oct 2024) Pose Landmark Detection

Today's program uses OpenCV and MediaPipe to detect human pose landmarks in real time. It is the basic template for this kind of pose-detection program; over the next few days we will build on it by gradually adding different features.
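A minimal sketch of that template using MediaPipe Pose; parameter values are assumptions.

```python
# Real-time pose landmark sketch with OpenCV + MediaPipe Pose.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            mp_drawing.draw_landmarks(frame, results.pose_landmarks,
                                      mp_pose.POSE_CONNECTIONS)
        cv2.imshow("Pose", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```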

Click here

🟡 Day13 (26 Oct 2024) Virtual Fitting Room

Today's program is an interactive virtual fitting room that uses computer vision and pose estimation to achieve a real-time virtual try-on effect. Its main function is to overlay prepared clothing images (tops and trousers) onto the body captured by the camera, letting users see themselves wearing different outfits on screen.

Click here

🟡 Day14 (27 Oct 2024) Exercise Game

Today's program uses OpenCV and MediaPipe to count the repetitions of an exercise, in this case an arm-curl competition. By choosing different landmarks, it can be adapted into other exercise mini-games.
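A common way to count reps is to measure the elbow angle from three pose landmarks and count one rep each time the arm goes from extended to curled; a sketch of that idea, with the angle thresholds as assumptions:

```python
# Rep counting from pose landmarks via the elbow angle; thresholds are illustrative.
import math

def joint_angle(a, b, c):
    """Angle in degrees at landmark b, formed by landmarks a-b-c."""
    ang = math.degrees(math.atan2(c.y - b.y, c.x - b.x) -
                       math.atan2(a.y - b.y, a.x - b.x))
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

reps, stage = 0, "down"

def update_reps(shoulder, elbow, wrist):
    """Call once per frame with the shoulder, elbow and wrist landmarks."""
    global reps, stage
    angle = joint_angle(shoulder, elbow, wrist)
    if angle > 160:                        # arm extended
        stage = "down"
    elif angle < 40 and stage == "down":   # arm curled after being extended
        stage = "up"
        reps += 1
    return reps
```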

Click here

🟡 Day15 (28 Oct 2024) Emotion Recognition

Today's program is a real-time emotion detection system. It captures the user's facial expressions through the computer's camera and uses artificial intelligence to analyse the current emotional state. Combining computer vision and deep learning, it lets the computer "read" human emotions.

Click here

🟡 Day16 (29 Oct 2024) Object Tracking

Today's program is a fun real-time object tracker, but note how it is used: after the program starts, press S, then click on the top-left corner of the object, drag to its bottom-right corner, and press Enter to confirm. This teaches the program which object to track, and it will then keep following that object.
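A sketch of that select-and-track flow with OpenCV's tracking API; the CSRT tracker requires opencv-contrib-python, and on some builds the constructor lives under cv2.legacy instead of cv2.

```python
# Select an object with the mouse (press S), then track it frame by frame.
import cv2

cap = cv2.VideoCapture(0)
tracker = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if tracker is not None:
        found, box = tracker.update(frame)
        if found:
            x, y, w, h = map(int, box)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Tracking", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):
        # Drag a box around the object, then press Enter/Space to confirm.
        box = cv2.selectROI("Tracking", frame)
        if any(box):
            tracker = cv2.TrackerCSRT_create()   # or cv2.legacy.TrackerCSRT_create()
            tracker.init(frame, box)
    elif key == 27:   # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```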

Click here

🟡 Day17 (30 Oct 2024) Detecting Moving Objects

Today's program demonstrates how to use Python's OpenCV library for real-time image processing, specifically motion detection. By capturing frames from the camera, the code identifies and marks moving objects in the image, which has broad applications in security monitoring, smart homes, and other automation scenarios.
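A sketch of one common approach using background subtraction; the article may instead use simple frame differencing, and the area threshold below is an assumption.

```python
# Motion detection sketch: background subtraction + bounding boxes around moving blobs.
import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                       # moving pixels become white
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) > 1000:              # ignore small noise
            x, y, w, h = cv2.boundingRect(contour)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("Motion", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```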

Click here

🟡 Day18 (31 Oct 2024) Sending Messages via WhatsApp

Today's program uses the pywhatkit module to send WhatsApp messages automatically. With a simple function call, you can specify the recipient's phone number, the message content, and the sending time. This is very useful for scenarios that require scheduled messages, such as reminders, notifications, or automated communication.
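A minimal sketch of that call; the phone number, message, and time below are placeholders, and WhatsApp Web must be logged in within your browser for it to work.

```python
# Schedule a WhatsApp message with pywhatkit; the number and time are placeholders.
import pywhatkit

# Send "Hello from Python!" to the given number at 21:30 (24-hour clock).
pywhatkit.sendwhatmsg("+85212345678", "Hello from Python!", 21, 30)
```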

Click here

🟡 Day19 (1 Nov 2024) Chat with AI by Voice

Today's program is an interactive system combining speech recognition, AI conversation, and speech synthesis. You speak to it, the system converts your speech into text and sends it to the AI, and the AI's reply is played back as speech, creating a natural, fluent human-computer conversation.
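A sketch of the listen-think-speak loop under the assumption that the Day01 and Day02 tools are reused; the ask_ai() function here is a hypothetical placeholder for whichever AI backend the article actually calls.

```python
# Listen -> ask AI -> speak loop; ask_ai() is a placeholder, not the article's AI call.
import pyttsx3
import speech_recognition as sr

recognizer = sr.Recognizer()
engine = pyttsx3.init()

def ask_ai(prompt):
    # Placeholder: swap in a call to your chosen AI service or local model.
    return "You said: " + prompt

while True:
    with sr.Microphone() as source:
        print("Listening...")
        audio = recognizer.listen(source)
    try:
        text = recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        continue                      # could not understand; listen again
    reply = ask_ai(text)
    engine.say(reply)
    engine.runAndWait()
```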

Click here

🟡 Day20 (2 Nov 2024) Chat with a Local AI (Ollama)

Today's program demonstrates how to use Python and Ollama to create an AI assistant with its own personality on your own computer.
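A minimal sketch using the ollama Python package; it assumes the Ollama service is running locally and that a model such as "llama3" (an assumed name) has already been pulled.

```python
# Chat with a locally running Ollama model; the model name and persona are assumptions.
import ollama

response = ollama.chat(
    model="llama3",   # use whichever model you have pulled locally
    messages=[
        {"role": "system", "content": "You are a cheerful assistant who loves cats."},
        {"role": "user", "content": "Introduce yourself in one sentence."},
    ],
)
print(response["message"]["content"])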

Click here

🟡 Day21 (3 Nov 2024) [Part 1] Training a Model with Teachable Machine

In today's Part 1 we will use Teachable Machine to train our own image recognition model, then export it in TensorFlow Lite format for use in Part 2.

Click here

🟡 Day21 (3 Nov 2024) [Part 2] Using the Model Trained with Teachable Machine

In today's Part 2 we will use the image recognition model built earlier to write a Python program that can identify objects.
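A sketch of loading a Teachable Machine TensorFlow Lite export and classifying one image; the file names, 224x224 input size, and pixel scaling are assumptions based on the usual export.

```python
# Run a Teachable Machine TFLite export on one image; file names are assumptions.
import cv2
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_unquant.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

labels = [line.strip() for line in open("labels.txt", encoding="utf-8")]

img = cv2.imread("test.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)     # model expects RGB
img = cv2.resize(img, (224, 224))
img = (np.float32(img) / 127.5) - 1.0          # scale pixels to [-1, 1]
img = np.expand_dims(img, axis=0)              # add the batch dimension

interpreter.set_tensor(input_details[0]["index"], img)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])[0]
print("Prediction:", labels[int(np.argmax(scores))], float(np.max(scores)))
```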

Click here

Postscript: this 21-day Python challenge is finally complete! I hope these Python examples will be helpful to you all!