ggez: Possible memory leak on to_rgba8

Grabbing a snapshot of the canvas with `to_rgba8` seems to create a memory leak over time, as if something internal to the function is not released.

The leak seems to be around 50-60 KB per second.

We stripped our use case back to the following: `update` does nothing, and `draw` does the following…

    fn draw(&mut self, ctx: &mut Context) -> GameResult {
        self.capture(ctx);
        self.render(ctx);
        graphics::present(ctx)?;
        Ok(())
    }
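
For completeness, the corresponding no-op `update` (a minimal sketch, assuming the stock ggez 0.6 `EventHandler::update` signature):

    fn update(&mut self, _ctx: &mut Context) -> GameResult {
        // Intentionally does nothing; the repro lives entirely in draw().
        Ok(())
    }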

We have the following struct: our state, which holds a stage. The state maintains the canvas that the stage draws into, and we grab a capture of that canvas before allowing it to draw. If we don’t perform the capture, everything works; if we do, we leak memory.

use ggez::graphics::{self, Canvas, DrawParam};
use ggez::Context;
use std::sync::{Arc, RwLock};

pub struct State {
    pub canvas: Canvas,   // off-screen canvas the stage draws into
    pub stage: Stage,
}

impl State {
    pub fn new(ctx: &mut Context) -> State {
        let canvas = Canvas::with_window_size(ctx).unwrap();

        State {
            canvas,                  // off-screen canvas
            stage: Stage::new(ctx),
        }
    }

    fn capture(&mut self, ctx: &mut Context) {
        self.stage.capture_stage(ctx, &self.canvas);
    }

    /// renders the off-screen canvas to the on-screen canvas
    fn render(&self, ctx: &mut Context) {
        let params = DrawParam::new().dest([0.0, 0.0]);
        graphics::set_canvas(ctx, None);
        graphics::draw(ctx, &self.canvas, params).unwrap();
    }
}

pub struct Stage {
    capture_video: Arc<RwLock<Vec<u8>>>,
}

impl Stage {
    pub fn new(ctx: &mut Context) -> Stage {
        let canvas = graphics::Canvas::with_window_size(ctx).unwrap();
        let capture_video = Arc::new(RwLock::new(canvas.image().to_rgba8(ctx).unwrap()));

        Stage {
            capture_video,
        }
    }

    pub fn capture_stage(&mut self, ctx: &mut Context, canvas: &graphics::Canvas) {
        let screen = canvas.image().to_rgba8(ctx).unwrap();    // <<<<<<<<<<<<<<<<<<<<<<<<< THIS CAUSES A LEAK

        // let mut bytes = self.capture_video.write().unwrap();
        // *bytes = screen;
        // drop(bytes);

    }
}

This should just run all day long without really doing anything, and `screen` should be dropped at the end of each call, releasing its memory.
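
To illustrate that expectation, a plain `Vec<u8>` of comparable size allocated and dropped in a loop does not grow the process’s memory, so we don’t think the returned buffer itself is the problem (a minimal, ggez-free sketch; the helper name and 1280x720 size are just illustrative):

    // Minimal, ggez-free sketch: allocate and drop a buffer of comparable
    // size on every iteration. This does not leak; the Vec is freed at the
    // end of each loop body. (1280x720 is an assumed window size.)
    fn vec_drop_sanity_check() {
        for _ in 0..10_000 {
            let screen: Vec<u8> = vec![0u8; 1280 * 720 * 4];
            drop(screen); // explicit for emphasis; it would be dropped here anyway
        }
    }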

If we allow the `let screen = canvas.image().to_rgba8(ctx).unwrap();` line to run, even with everything else stripped out of our `update` and `draw` handlers, memory leaks continuously.
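
One mitigation we could apply on our side while this is investigated is to capture less often; it would only slow the growth, not stop it. A sketch, assuming a reduced capture rate is acceptable and adding a hypothetical `frame_count: usize` field to `State`:

    // Hypothetical mitigation, not a fix: only read the canvas back every
    // few frames so whatever to_rgba8 retains accumulates more slowly.
    // `frame_count` is an assumed extra `usize` field on State.
    fn capture(&mut self, ctx: &mut Context) {
        const CAPTURE_INTERVAL: usize = 30; // assumed acceptable capture rate
        self.frame_count = self.frame_count.wrapping_add(1);
        if self.frame_count % CAPTURE_INTERVAL == 0 {
            self.stage.capture_stage(ctx, &self.canvas);
        }
    }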

Hardware and Software:

  • ggez version: 0.6.rc1
  • OS: macOS Big Sur

This is the current source for the to_rgba8() fn…

    /// Dumps the `Image`'s data to a `Vec` of `u8` RGBA values.
    pub fn to_rgba8(&self, ctx: &mut Context) -> GameResult<Vec<u8>> {
        use gfx::memory::Typed;
        use gfx::traits::FactoryExt;

        let gfx = &mut ctx.gfx_context;
        let w = self.width;
        let h = self.height;

        // Note: In the GFX example, the download buffer is created ahead of time
        // and updated on screen resize events. This may be preferable, but then
        // the buffer also needs to be updated when we switch to/from a canvas.
        // Unsure of the performance impact of creating this as it is needed.
        // Probably okay for now though, since this probably won't be a super
        // common operation.
        let dl_buffer = gfx
            .factory
            .create_download_buffer::<[u8; 4]>(w as usize * h as usize)?;

        let factory = &mut *gfx.factory;
        let mut local_encoder = GlBackendSpec::encoder(factory);

        local_encoder.copy_texture_to_buffer_raw(
            &self.texture_handle,
            None,
            gfx::texture::RawImageInfo {
                xoffset: 0,
                yoffset: 0,
                zoffset: 0,
                width: w as u16,
                height: h as u16,
                depth: 0,
                format: gfx.color_format(),
                mipmap: 0,
            },
            dl_buffer.raw(),
            0,
        )?;
        local_encoder.flush(&mut *gfx.device);

        let reader = gfx.factory.read_mapping(&dl_buffer)?;

        // intermediary buffer to avoid casting.
        let mut data = Vec::with_capacity(self.width as usize * self.height as usize * 4);
        // Assuming OpenGL backend whose typical readback option (glReadPixels) has origin at bottom left.
        // Image formats on the other hand usually deal with top right.
        for y in (0..self.height as usize).rev() {
            data.extend(
                reader
                    .iter()
                    .skip(y * self.width as usize)
                    .take(self.width as usize)
                    .flatten(),
            );
        }
        Ok(data)
    }
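
One way to narrow this down would be a fork-level experiment: short-circuit the function so it returns a zero-filled buffer of the right size without touching the GPU at all. If the growth disappears, whatever is retained lives on the gfx side of the copy/mapping rather than in the returned `Vec` (a hypothetical sketch; `to_rgba8_stubbed` is not a real ggez function):

    // Hypothetical experiment inside a local fork of ggez: skip the download
    // buffer, encoder, and mapping entirely and hand back zeroed pixels.
    // If memory stops growing, the leak is in the GPU readback path above,
    // not in the Vec<u8> the caller receives.
    pub fn to_rgba8_stubbed(&self, _ctx: &mut Context) -> GameResult<Vec<u8>> {
        Ok(vec![0u8; self.width as usize * self.height as usize * 4])
    }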

Is there anything above that we’re not releasing??

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 21 (10 by maintainers)

Most upvoted comments

We’ll test it today. You rock.