Can you be more specific about the context you're interested in?
On modern machines, a pixel is just a set of three numbers indicating how much red, green and blue light should be shown at a particular point. Ex: {0.75 red, 0.5 green, 0.0 blue} for a kinda-dark, orange pixel. The GPU keeps a big 2D grid of these number-triples in memory and, on a regular schedule, sends out a copy over the DVI cable to your monitor. The monitor has a bit of memory to hold its copy. And it has hardware that scans over the grid of numbers to produce a sequence of voltage levels, which are used to change the color of the points on the LCD.
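The grid-of-triples idea can be sketched in a few lines of plain Python (no graphics libraries; the dimensions and pixel layout here are made up for illustration, not any real framebuffer format):

```python
# A "framebuffer" as a 2D grid: HEIGHT rows, each row WIDTH pixels,
# each pixel an (r, g, b) triple of values in [0.0, 1.0].
WIDTH, HEIGHT = 4, 3

# The kinda-dark orange pixel from the text: lots of red, some green, no blue.
ORANGE = (0.75, 0.5, 0.0)

# Start with an all-black image.
framebuffer = [[(0.0, 0.0, 0.0) for _ in range(WIDTH)] for _ in range(HEIGHT)]

# "Changing the image" is just overwriting entries in the grid.
framebuffer[1][2] = ORANGE
```

A real GPU stores this same structure far more compactly (typically 8 bits per channel, packed), but the mental model is the same: one triple per point on the screen.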
There's a bit of math involved in how to do a good job representing colors with numbers and how to convert those numbers to voltages. But, at the most basic level, an image is just a big 2D grid of numbers. If you want to change the image, poke the grid. People want to change images a whole lot, so we've developed pretty sophisticated hardware and software around poking 2D grids... But that's a whole other topic.
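To give a flavor of the "bit of math" mentioned above: displays don't respond linearly to the numbers, so channel values are usually gamma-encoded before being mapped to discrete drive levels. This is a simplified, hypothetical model (the gamma value and level count are illustrative assumptions, not any particular monitor's real transfer curve):

```python
def channel_to_drive_level(value, gamma=2.2, levels=256):
    """Map a [0.0, 1.0] channel value to one of `levels` discrete drive steps.

    The power curve approximates gamma decoding; real displays use
    standardized transfer functions (e.g. sRGB) that are close to,
    but not exactly, a pure power law.
    """
    clamped = max(0.0, min(1.0, value))
    linear = clamped ** gamma          # undo the gamma encoding
    return round(linear * (levels - 1))  # quantize to a hardware step
```

So 0.0 maps to the lowest step, 1.0 to the highest, and mid-range values land disproportionately low because of the power curve.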