How to generate video thumbnails with FFmpeg

Computer Software Development · 2024-9-6 15:00:43
Core idea

Use FFmpeg to grab the first key frame of the video, convert it to a UIImage, and then save it as a JPEG. If you don't need to persist it, you can use the UIImage object directly.
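Before writing any Objective-C, the same idea can be sanity-checked with the ffmpeg command-line tool. A minimal sketch (the first line synthesizes a short test clip just for demonstration; any real video file works as `input.mp4`):

```shell
# Synthesize a one-second test clip (stand-in for a real video file)
ffmpeg -y -f lavfi -i testsrc=duration=1:size=320x240:rate=10 -c:v mpeg4 input.mp4
# Decode one frame from the start of the video and save it as a JPEG thumbnail.
# The first frame of a stream is normally a key frame.
ffmpeg -y -i input.mp4 -frames:v 1 thumbnail.jpg
```

If you specifically want the first I-frame rather than the first frame, ffmpeg's `select` filter can match on picture type, e.g. `-vf "select='eq(pict_type,I)'" -frames:v 1`.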
Manual FFmpeg integration

I used ffmpeg-kit to build FFmpeg. The build script is ffmpeg-kit/tools/release/ios.sh; after it finishes, the artifacts can be found under
ffmpeg-kit/prebuilt/bundle-apple-cocoapods-ios/ffmpeg-kit-ios-min/. Point your Podfile at the ffmpeg-kit-ios-min.podspec in that directory, or push it to your own git repo.
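The CocoaPods side of this might look like the fragment below. This is a sketch, not part of the original post: the target name `MyApp` is hypothetical, and the podspec path assumes the ffmpeg-kit checkout sits next to your Podfile.

```ruby
# Podfile — reference the locally built podspec directly
target 'MyApp' do
  pod 'ffmpeg-kit-ios-min',
      :podspec => 'ffmpeg-kit/prebuilt/bundle-apple-cocoapods-ios/ffmpeg-kit-ios-min/ffmpeg-kit-ios-min.podspec'
end
```

Using `:podspec` with a local path avoids publishing the pod anywhere; pointing at a git repo of your own works the same way with `:git`.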
Code implementation

Open the video file with FFmpeg

```objc
// Create an AVFormatContext from the file
AVFormatContext *context = avformat_alloc_context();
int ret;
ret = avformat_open_input(&context, [videoPath UTF8String], NULL, NULL);
if (ret != 0) goto free_res;
// Read the stream information
ret = avformat_find_stream_info(context, NULL);
if (ret != 0) goto free_res;
```

Find the video stream

```objc
// Find the video stream
AVStream *videoStream = NULL;
int videoStreamIndex = -1;
for (int i = 0; i < context->nb_streams; ++i) {
  if (context->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
    videoStream = context->streams[i];
    videoStreamIndex = i;
    break;
  }
}
if (!videoStream) goto free_res;
```

The stream index is also saved here so packets can be matched against it later.
Create the decoder

```objc
// Create the video decoder
const AVCodec *videoCodec = avcodec_find_decoder(videoStream->codecpar->codec_id);
AVCodecContext *videoCodecContext = avcodec_alloc_context3(videoCodec);
avcodec_parameters_to_context(videoCodecContext, videoStream->codecpar);
ret = avcodec_open2(videoCodecContext, videoCodec, NULL);
if (ret != 0) goto free_res;
```

Read the first video I-frame

```objc
AVPacket *firstPacket = av_packet_alloc();
AVFrame *rawFrame = av_frame_alloc();
while (av_read_frame(context, firstPacket) == 0) {
    if (firstPacket->stream_index == videoStreamIndex) {
        avcodec_send_packet(videoCodecContext, firstPacket);
        avcodec_receive_frame(videoCodecContext, rawFrame);
        if (rawFrame->pict_type == AV_PICTURE_TYPE_I) {
            av_packet_unref(firstPacket);
            break;
        }
    }
    // Release the packet's buffer before reading the next one
    av_packet_unref(firstPacket);
}
```

Scale the image and convert the pixel format with sws_scale

```objc
int width = rawFrame->width;
int height = rawFrame->height;
int bitsPerComponent = 8;
int bitsPerPixel = bitsPerComponent * 4;
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSMutableData *rgbaData = [NSMutableData.alloc initWithLength:width * height * 4];
void *dstAddress = (void *)rgbaData.bytes;
// Convert the frame with swscale
struct SwsContext *swsContext = sws_getContext(rawFrame->width, rawFrame->height, rawFrame->format,
                                               rawFrame->width, rawFrame->height, AV_PIX_FMT_RGBA,
                                               SWS_BILINEAR, NULL, NULL, NULL);
sws_scale(swsContext,
          (const uint8_t *const *)rawFrame->data,
          rawFrame->linesize,
          0,
          rawFrame->height,
          (uint8_t *const *)&dstAddress,
          &bytesPerRow);
```

This converts the AVFrame's image to the RGBA pixel format, with the data stored in rgbaData.
Convert the RGBA data to a UIImage

```objc
CFDataRef rgbDataRef = (__bridge CFDataRef)rgbaData;
CGDataProviderRef provider = CGDataProviderCreateWithCFData(rgbDataRef);
CGImageRef cgImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                   colorSpace, kCGImageAlphaLast | kCGBitmapByteOrderDefault,
                                   provider, NULL, YES, kCGRenderingIntentDefault);
UIImage *img = [UIImage.alloc initWithCGImage:cgImage];
CGImageRelease(cgImage);
```

Convert the UIImage to NSData and save it to disk

```objc
NSData *imgData = UIImageJPEGRepresentation(img, 0.8);
[imgData writeToFile:destPath atomically:YES];
```

Release the related objects

```objc
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
sws_freeContext(swsContext);
avcodec_free_context(&videoCodecContext);
av_packet_free(&firstPacket);
av_frame_free(&rawFrame);
// avformat_close_input both closes the input and frees the context
avformat_close_input(&context);
```