Three Ways to Implement a Drawing Board on iOS

UIBezierPath, Quartz2D, OpenGL ES

1. UIBezierPath


  • UIBezierPath creates vector-based paths; the class is a wrapper around the path support in the Core Graphics framework. With it you can define simple shapes such as ellipses and rectangles, or shapes made up of multiple straight-line and curve segments.

  • UIBezierPath is a wrapper around the CGPathRef data type. Vector-based paths are all built from straight lines and curves: straight segments create rectangles and polygons, while curves create arcs, circles, and other more complex curved shapes (see the short sketch below).
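For instance, the simple shapes described above can be built in a few lines. The following is a small illustrative sketch, not part of the drawing-board code in this post:

// Predefined simple shapes
UIBezierPath *oval = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(10, 10, 100, 60)];
UIBezierPath *rect = [UIBezierPath bezierPathWithRect:CGRectMake(10, 90, 100, 60)];

// A custom shape made of a straight segment plus a quadratic curve
UIBezierPath *shape = [UIBezierPath bezierPath];
[shape moveToPoint:CGPointMake(10, 200)];
[shape addLineToPoint:CGPointMake(110, 200)];
[shape addQuadCurveToPoint:CGPointMake(10, 260) controlPoint:CGPointMake(110, 260)];
[shape closePath];

// Inside a view's drawRect: each path would then be stroked, e.g. [shape stroke];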
Steps for drawing with UIBezierPath:


  • Create a UIBezierPath object
  • Call -moveToPoint: to set the starting point of the first segment
  • Add straight-line or curve segments, e.g. with addQuadCurveToPoint:
  • Draw in drawRect: by stroking each UIBezierPath
1. Initialize the UIBezierPath and set its properties:

self.beziPath = [[UIBezierPath alloc] init];

2. Override touchesBegan, touchesMoved, and touchesEnded in the drawing-board view.

3. Each touch operation adds to and strokes the UIBezierPath:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint currentPoint = [touch locationInView:self];

    // Create a new path (DCBeizierPath is a UIBezierPath subclass that also
    // carries the line color and the eraser flag)
    self.beziPath = [[DCBeizierPath alloc] init];
    self.beziPath.lineColor = self.lineColor;
    self.beziPath.isErase = self.isErase;
    self.beziPath.lineJoinStyle = kCGLineJoinRound;
    self.beziPath.lineCapStyle = kCGLineCapRound;

    // Set the starting point
    [self.beziPath moveToPoint:currentPoint];

    // Add the path to the array of strokes
    [self.beziPathArrM addObject:self.beziPath];
}

- (void)touchesMoved:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];

    // Get the latest point and use the midpoint as the curve's control point
    CGPoint currentPoint = [touch locationInView:self];
    CGPoint previousPoint = [touch previousLocationInView:self];
    CGPoint midP = midPoint(previousPoint, currentPoint);

    // 1. Add the point to the path; 2. trigger redrawing
    [self.beziPath addQuadCurveToPoint:currentPoint controlPoint:midP];
    [self setNeedsDisplay];
}

- (void)touchesEnded:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];

    // Get the end point
    CGPoint currentPoint = [touch locationInView:self];
    CGPoint previousPoint = [touch previousLocationInView:self];
    CGPoint midP = midPoint(previousPoint, currentPoint);

    // 1. Add the point to the path; 2. trigger redrawing
    [self.beziPath addQuadCurveToPoint:currentPoint controlPoint:midP];
    [self setNeedsDisplay];
}

// Midpoint of two points
CGPoint midPoint(CGPoint p1, CGPoint p2)
{
    return CGPointMake((p1.x + p2.x) * 0.5, (p1.y + p2.y) * 0.5);
}

4. Draw:

- (void)drawRect:(CGRect)rect
{
    // Stroke every stored path
    if (self.beziPathArrM.count) {
        for (DCBeizierPath *path in self.beziPathArrM) {
            if (path.isErase) {
                // Eraser
                [[UIColor clearColor] setStroke];
                path.lineWidth = kEraseLineWidth;
                [path strokeWithBlendMode:kCGBlendModeCopy alpha:1.0];
            } else {
                // Pen
                [path.lineColor setStroke];
                path.lineWidth = kLineWidth;
                [path strokeWithBlendMode:kCGBlendModeNormal alpha:1.0];
            }
        }
    }
    [super drawRect:rect];
}

Summary


  • Pros
    1. This is the simplest approach, implemented directly with the Objective-C API, so the methods are straightforward.
    2. Adding paths from an already stored set of points and then stroking them is fast.
  • Cons
    1. If every stroke you draw must stay on screen, you have to keep every drawing path.
    2. Every time a new stroke is drawn, all previously drawn segments are redrawn as well, which wastes system performance.
    3. Even if you do not mind that waste, there is another problem: the more segments you draw, the larger the gap between recognized touch points becomes, drawing noticeably slows down, and you can gradually watch the earlier segments being repainted.
  • Application scenarios
    1. Drawing some simple line segments once, without later modification.
    2. Simple line segments needed for UI effects.
    3. Not recommended when frequent modification and drawing are required.
2. Quartz2D

When a line is drawn, the method internally creates a path by default and stores all the drawing information in that path.
1. Create a path: calling CGPathCreateMutable() creates a CGMutablePathRef, which holds the drawing information.
2. Add the drawing information to the path. Previously, the point positions were added directly to ctx (the graphics context); ctx creates a path internally by default to hold that information. The graphics context has a dedicated block of storage for drawing information, and that storage is in fact a CGMutablePathRef.
3. Add the path to the context (a minimal sketch follows).
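To make the three steps concrete, here is a minimal sketch (an illustration, not code from the original project) that creates a CGMutablePathRef, adds a curve to it, adds the path to the current context inside drawRect:, and strokes it:

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // 1. Create a path that will hold the drawing information
    CGMutablePathRef path = CGPathCreateMutable();

    // 2. Add the drawing information (points / curves) to the path
    CGPathMoveToPoint(path, NULL, 20, 20);
    CGPathAddQuadCurveToPoint(path, NULL, 60, 100, 120, 40);

    // 3. Add the path to the graphics context, then stroke it
    CGContextAddPath(ctx, path);
    CGContextSetLineWidth(ctx, 5);
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextStrokePath(ctx);

    CGPathRelease(path);
}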
1. Initial pen setup:

- (void)setup
{
    self.multipleTouchEnabled = YES;
    self.lineWidth = 5;
    self.lineColor = [UIColor blackColor];
}

2. The touch-handling methods:

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];

    self.previousPoint1 = [touch locationInView:self];
    self.previousPoint2 = [touch locationInView:self];
    self.currentPoint   = [touch locationInView:self];

    CGPoint mid1 = midPoint1(self.previousPoint1, self.previousPoint2);
    CGPoint mid2 = midPoint1(self.currentPoint, self.previousPoint1);

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, mid1.x, mid1.y);
    CGPathAddQuadCurveToPoint(path, NULL, self.previousPoint1.x, self.previousPoint1.y, mid2.x, mid2.y);

    // Only redraw the rect covered by this segment
    CGRect bounds = CGPathGetBoundingBox(path);
    CGPathRelease(path);
    [self setNeedsDisplayInRect:bounds];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];

    self.previousPoint2 = self.previousPoint1;
    self.previousPoint1 = [touch previousLocationInView:self];
    self.currentPoint   = [touch locationInView:self];

    CGPoint mid1 = midPoint1(self.previousPoint1, self.previousPoint2);
    CGPoint mid2 = midPoint1(self.currentPoint, self.previousPoint1);

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, mid1.x, mid1.y);
    CGPathAddQuadCurveToPoint(path, NULL, self.previousPoint1.x, self.previousPoint1.y, mid2.x, mid2.y);

    CGRect bounds = CGPathGetBoundingBox(path);
    CGPathRelease(path);
    [self setNeedsDisplayInRect:bounds];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];

    self.previousPoint2 = self.previousPoint1;
    self.previousPoint1 = [touch previousLocationInView:self];
    self.currentPoint   = [touch locationInView:self];

    CGPoint mid1 = midPoint1(self.previousPoint1, self.previousPoint2);
    CGPoint mid2 = midPoint1(self.currentPoint, self.previousPoint1);

    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, mid1.x, mid1.y);
    CGPathAddQuadCurveToPoint(path, NULL, self.previousPoint1.x, self.previousPoint1.y, mid2.x, mid2.y);

    // Draw
    CGRect bounds = CGPathGetBoundingBox(path);
    CGPathRelease(path);
    [self setNeedsDisplayInRect:bounds];
}

3. Drawing:

- (void)drawRect:(CGRect)rect
{
    CGPoint mid1 = midPoint1(self.previousPoint1, self.previousPoint2);
    CGPoint mid2 = midPoint1(self.currentPoint, self.previousPoint1);

    self.context = UIGraphicsGetCurrentContext();
    CGContextMoveToPoint(self.context, mid1.x, mid1.y);
    // Add the curve for the latest segment
    CGContextAddQuadCurveToPoint(self.context, self.previousPoint1.x, self.previousPoint1.y, mid2.x, mid2.y);
    // Round line caps
    CGContextSetLineCap(self.context, kCGLineCapRound);
    // Line width (wider when erasing)
    CGContextSetLineWidth(self.context, self.isErase ? kEraseLineWidth : kLineWidth);
    // Stroke color (clear when erasing)
    CGContextSetStrokeColorWithColor(self.context, self.isErase ? [UIColor clearColor].CGColor : self.lineColor.CGColor);
    CGContextSetLineJoin(self.context, kCGLineJoinRound);
    // Choose the blend mode depending on whether this is the eraser
    CGContextSetBlendMode(self.context, self.isErase ? kCGBlendModeDestinationIn : kCGBlendModeNormal);
    CGContextStrokePath(self.context);
    [super drawRect:rect];
}

Summary


  • Pros
    1. The drawing is implemented at a lower level, so it is more efficient and faster.
    2. Every stroke is smooth, with no perceptible lag, and already-drawn paths are not redrawn.
    3. The line segments look rounder and smoother.
  • Cons
    1. If you have to redraw the paths for an existing set of points, it takes a long time to finish.
    2. If the app is already consuming a lot of resources (ours runs a video session, a communication session, and many interactive controls), drawing on an iPad 3 shows gaps in the stroke (iPad 2, iPad mini 2/3, and iPad Air do not have this problem; the iPhone 4s and 5 were not tested). The likely reason: the iPad 3 has a Retina screen, so the resolution doubled, but its CPU and GPU are only about 50% faster than the iPad 2's, so touch points cannot be sampled continuously and the line breaks up.
  • Usage scenarios
    1. Usable when the app is not performance-heavy and no redrawing is needed.
    2. Drawing happens on a single page and does not need to be redrawn afterwards.
    3. When you do not care about that bit of time cost.
3. OpenGL ES

1. Add the OpenGLES.framework system library and import the headers:
#import <OpenGLES/EAGL.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>
#import <GLKit/GLKit.h>

2. Import the supporting files:
#import "shaderUtil.h"
#import "fileUtil.h"
#import "debug.h"

3. The semi-transparent brush image (attached as 1.png) is important: it serves as the brush tip, and its alpha values control how light or dark the rendered stroke color appears.
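The methods below also assume the view is backed by a CAEAGLLayer and owns an OpenGL ES 2.0 context stored in the `context` ivar. A minimal sketch of that setup (the method name setupContext is an assumption, not from the original post):

+ (Class)layerClass
{
    // Back the view with a CAEAGLLayer so OpenGL ES can render into it
    return [CAEAGLLayer class];
}

- (BOOL)setupContext
{
    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
    eaglLayer.opaque = NO;
    // Retain the backing store so earlier strokes stay visible between presents
    eaglLayer.drawableProperties = @{
        kEAGLDrawablePropertyRetainedBacking : @YES,
        kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8
    };

    context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!context || ![EAGLContext setCurrentContext:context]) {
        return NO;
    }
    return YES;
}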

// Create a texture from an image
- (textureInfo_t)textureFromName:(NSString *)name
{
    CGImageRef      brushImage;
    CGContextRef    brushContext;
    GLubyte         *brushData;
    size_t          width, height;
    GLuint          texId;
    textureInfo_t   texture;

    // First create a UIImage object from the data in an image file, and then extract the Core Graphics image
    brushImage = [UIImage imageNamed:name].CGImage;

    // Get the width and height of the image
    width = CGImageGetWidth(brushImage);
    height = CGImageGetHeight(brushImage);

    // Make sure the image exists
    if (brushImage) {
        // Allocate memory needed for the bitmap context
        brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // Use the bitmap creation function provided by the Core Graphics framework
        brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
        // After you create the context, you can draw the image into the context
        CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
        // You don't need the context at this point, so release it to avoid memory leaks
        CGContextRelease(brushContext);
        // Use OpenGL ES to generate a name for the texture
        glGenTextures(1, &texId);
        // Bind the texture name
        glBindTexture(GL_TEXTURE_2D, texId);
        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // Specify a 2D texture image, providing a pointer to the image data in memory
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)width, (int)height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
        // Release the image data; it's no longer needed
        free(brushData);

        texture.id = texId;
        texture.width = (int)width;
        texture.height = (int)height;
    }

    return texture;
}

// Initialize OpenGL
- (BOOL)initGL
{
    // Generate IDs for a framebuffer object and a color renderbuffer
    glGenFramebuffers(1, &viewFramebuffer);
    glGenRenderbuffers(1, &viewRenderbuffer);

    // Bind the framebuffer and the color renderbuffer
    glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);

    // This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer),
    // allowing us to draw into a buffer that will later be rendered to screen wherever the layer is
    // (which corresponds with our view)
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(id<EAGLDrawable>)self.layer];

    // Attach the color renderbuffer to the framebuffer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, viewRenderbuffer);

    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);

    // For this sample, we do not need a depth buffer. If you do, this is how you can create one
    // and attach it to the framebuffer:
    //    glGenRenderbuffers(1, &depthRenderbuffer);
    //    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    //    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, backingWidth, backingHeight);
    //    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
        return NO;
    }

    // Set up the viewport
    glViewport(0, 0, backingWidth, backingHeight);

    // Create a Vertex Buffer Object to hold our data
    glGenBuffers(1, &vboId);

    // Load the brush texture (the brush-tip image)
    brushTexture = [self textureFromName:@"article"];

    // Load shaders
    [self setupShaders];

    // Enable blending and set a blending function appropriate for premultiplied alpha pixel data
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    return YES;
}

- (BOOL)resizeFromLayer:(CAEAGLLayer *)layer
{
    // Allocate color buffer backing based on the current layer size
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);

    // For this sample, we do not need a depth buffer. If you do, this is how you can allocate depth buffer backing:
    //    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    //    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, backingWidth, backingHeight);
    //    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
        return NO;
    }

    // Update projection matrix
    GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, backingWidth, 0, backingHeight, -1, 1);
    GLKMatrix4 modelViewMatrix = GLKMatrix4Identity; // this sample uses a constant identity modelView matrix
    GLKMatrix4 MVPMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);

    glUseProgram(program[PROGRAM_POINT].id);
    glUniformMatrix4fv(program[PROGRAM_POINT].uniform[UNIFORM_MVP], 1, GL_FALSE, MVPMatrix.m);

    // Update viewport
    glViewport(0, 0, backingWidth, backingHeight);

    return YES;
}

- (void)setupShaders
{
    for (int i = 0; i < NUM_PROGRAMS; i++)
    {
        char *vsrc = readFile(pathForResource(program[i].vert));
        char *fsrc = readFile(pathForResource(program[i].frag));
        GLsizei attribCt = 0;
        GLchar *attribUsed[NUM_ATTRIBS];
        GLint attrib[NUM_ATTRIBS];
        GLchar *attribName[NUM_ATTRIBS] = {
            "inVertex",
        };
        const GLchar *uniformName[NUM_UNIFORMS] = {
            "MVP", "pointSize", "vertexColor", "texture",
        };

        // Auto-assign known attribs
        for (int j = 0; j < NUM_ATTRIBS; j++)
        {
            if (strstr(vsrc, attribName[j]))
            {
                attrib[attribCt] = j;
                attribUsed[attribCt++] = attribName[j];
            }
        }

        glueCreateProgram(vsrc, fsrc,
                          attribCt, (const GLchar **)&attribUsed[0], attrib,
                          NUM_UNIFORMS, &uniformName[0], program[i].uniform,
                          &program[i].id);
        free(vsrc);
        free(fsrc);

        // Set constant / initialize uniforms
        if (i == PROGRAM_POINT)
        {
            glUseProgram(program[PROGRAM_POINT].id);

            // The brush texture will be bound to texture unit 0
            glUniform1i(program[PROGRAM_POINT].uniform[UNIFORM_TEXTURE], 0);

            // Viewing matrices
            GLKMatrix4 projectionMatrix = GLKMatrix4MakeOrtho(0, backingWidth, 0, backingHeight, -1, 1);
            GLKMatrix4 modelViewMatrix = GLKMatrix4Identity; // this sample uses a constant identity modelView matrix
            GLKMatrix4 MVPMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);

            glUniformMatrix4fv(program[PROGRAM_POINT].uniform[UNIFORM_MVP], 1, GL_FALSE, MVPMatrix.m);

            // Point size
            glUniform1f(program[PROGRAM_POINT].uniform[UNIFORM_POINT_SIZE], brushTexture.width / kBrushScale);

            // Initialize brush color
            glUniform4fv(program[PROGRAM_POINT].uniform[UNIFORM_VERTEX_COLOR], 1, brushColor);
        }
    }

    glError();
}

// Draw a line between two points
- (void)renderLineFromPoint:(CGPoint)start toPoint:(CGPoint)end
{
    // NSLog(@"drawLineWithPoints--%@--%@", NSStringFromCGPoint(start), NSStringFromCGPoint(end));
    static GLfloat      *vertexBuffer = NULL;
    static NSUInteger   vertexMax = 64;
    NSUInteger          vertexCount = 0, count, i;

    [EAGLContext setCurrentContext:context];
    glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);

    // Convert locations from points to pixels
    CGFloat scale = self.contentScaleFactor;
    start.x *= scale;
    start.y *= scale;
    end.x *= scale;
    end.y *= scale;

    // Allocate vertex array buffer
    if (vertexBuffer == NULL)
        vertexBuffer = malloc(vertexMax * 2 * sizeof(GLfloat));

    // Add points to the buffer so there are drawing points every X pixels
    count = MAX(ceilf(sqrtf((end.x - start.x) * (end.x - start.x) + (end.y - start.y) * (end.y - start.y)) / kBrushPixelStep), 1);
    for (i = 0; i < count; ++i) {
        if (vertexCount == vertexMax) {
            vertexMax = 2 * vertexMax;
            vertexBuffer = realloc(vertexBuffer, vertexMax * 2 * sizeof(GLfloat));
        }

        vertexBuffer[2 * vertexCount + 0] = start.x + (end.x - start.x) * ((GLfloat)i / (GLfloat)count);
        vertexBuffer[2 * vertexCount + 1] = start.y + (end.y - start.y) * ((GLfloat)i / (GLfloat)count);
        vertexCount += 1;
    }

    // Load data into the Vertex Buffer Object
    glBindBuffer(GL_ARRAY_BUFFER, vboId);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 2 * sizeof(GLfloat), vertexBuffer, GL_STATIC_DRAW);

    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, 0);

    // Draw the line as a series of point sprites
    glUseProgram(program[PROGRAM_POINT].id);
    glDrawArrays(GL_POINTS, 0, (int)vertexCount);

    // Display the buffer
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}

// Clear the canvas
- (void)clearDrawImageView
{
    [EAGLContext setCurrentContext:context];

    // Clear the buffer
    glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);

    // Display the buffer
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}

- (void)setBrushColorWithRed:(CGFloat)red green:(CGFloat)green blue:(CGFloat)blue alpha:(CGFloat)alpha
{
    // Update the brush color
    brushColor[0] = red;
    brushColor[1] = green;
    brushColor[2] = blue;
    brushColor[3] = alpha;

    if (initialized) {
        glUseProgram(program[PROGRAM_POINT].id);
        // Push the brush color uniform to the shader
        glUniform4fv(program[PROGRAM_POINT].uniform[UNIFORM_VERTEX_COLOR], 1, brushColor);
    }
}

// Releases resources when they are no longer needed
- (void)dealloc
{
    // Destroy framebuffers and renderbuffers
    if (viewFramebuffer) {
        glDeleteFramebuffers(1, &viewFramebuffer);
        viewFramebuffer = 0;
    }
    if (viewRenderbuffer) {
        glDeleteRenderbuffers(1, &viewRenderbuffer);
        viewRenderbuffer = 0;
    }
    if (depthRenderbuffer)
    {
        glDeleteRenderbuffers(1, &depthRenderbuffer);
        depthRenderbuffer = 0;
    }
    // Texture
    if (brushTexture.id) {
        glDeleteTextures(1, &brushTexture.id);
        brushTexture.id = 0;
    }
    // VBO
    if (vboId) {
        glDeleteBuffers(1, &vboId);
        vboId = 0;
    }

    // Tear down context
    if ([EAGLContext currentContext] == context)
        [EAGLContext setCurrentContext:nil];
}

Summary


  • Pros
    1. Very low level: drawing is faster and rendered directly by the hardware, which fixes the broken-stroke bug the previous approach showed on iPad 3 hardware.
    2. Better performance overall.
  • Cons
    1. So far I have not found a way to draw arcs with it.
    2. It is even lower level; the API is hard to read, mostly uncommented, and even the commented parts are hard to follow.
    3. Redrawing from a known set of points is also slow, but unlike the previous approach you can at least watch the stroke being traced out, which may suit some special requirements.
  • Pitfalls I hit after integrating it
    1. Switching between the eraser and the pen can leave the brush state ineffective, for reasons I never pinned down. Workaround: re-apply the state on every touchesBegan: (see the sketch at the end of this section).
    2. In eraser mode, a small dot keeps following the stroke while erasing, again for unknown reasons. Same workaround as above.
  • Application scenarios
    1. Drawing some simple line segments once, without later modification.
    2. Simple line segments needed for UI effects.
    3. Not recommended when frequent modification and redrawing are required.
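For reference, here is a sketch of how the touch handlers might drive -renderLineFromPoint:toPoint: and re-apply the brush state on every touchesBegan:, which is the workaround mentioned in the pitfalls above. This is an assumption rather than code from the original post; penColor and isErase are assumed properties of the painting view.

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // Workaround for the state-loss pitfall: re-apply the current brush state
    // at the start of every stroke (pen color shown here; the eraser state
    // would be re-applied the same way).
    CGFloat r = 0, g = 0, b = 0, a = 1;
    [self.penColor getRed:&r green:&g blue:&b alpha:&a];
    [self setBrushColorWithRed:r green:g blue:b alpha:a];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGRect bounds = self.bounds;

    CGPoint previous = [touch previousLocationInView:self];
    CGPoint current  = [touch locationInView:self];
    // OpenGL's origin is at the bottom-left corner, so flip the y coordinates
    previous.y = bounds.size.height - previous.y;
    current.y  = bounds.size.height - current.y;

    [self renderLineFromPoint:previous toPoint:current];
}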