Wandering Thoughts archives

2021-10-18

The cut and paste irritation in "smart" in-browser text editing

It's common for modern websites and browser-based applications (such as Grafana and Prometheus) to have some corner of their experience where you enter text. On Twitter or the Fediverse, you may want to write tweets; in Prometheus and Grafana, you may want to enter potentially complex metrics system expressions to evaluate, and so on. HTML has a basic but perfectly functional set of mechanisms for this, in the <textarea> and '<input type="text" ...>' form elements. However, increasingly often websites and apps feel that these are not good enough, and that what people really want is a more advanced experience that's more like a full-blown local editor.

(Sometimes this means code editor style syntax colouring, highlighting of matching brackets, smart indentation, and autocompletion (as in Grafana and Prometheus). Other times it means augmenting the basic job of text entry with additional features, as in Twitter.)

Unfortunately, quite a lot of the time an important feature seems to get left on the cutting room floor in the process of adding this smart text editing: good support for cut and paste (in the broad sense that includes copying text as well). True, native browser cut and paste has a surprisingly large range of features (especially on Unix), but web editors often fumble some or all of them. For instance, on Unix (well, X, I can't speak for Wayland) you can normally highlight some text and then paste it with the middle mouse button. This works fine on normal HTML input elements (because the browsers get it right), but I have seen a wide range of behaviors on both the 'copy' and the 'paste' side with smart text editors. Some smart editors only highlight text when you select it and don't let you actually copy it, especially to other applications; some copy but spray the text with Unicode byte order marks. Sometimes you can't paste text in with the middle mouse button, or at the least it gets misinterpreted.

The slower, full scale, explicit Copy and Paste operations are more likely to work, but even they aren't always safe in things that claim to be text fields. Even when they work, sometimes Ctrl-C and Ctrl-V are intercepted and you can only perform them through menus. And I've seen systems where text would appear to paste but came out mangled.

Unfortunately, it's easy for me to imagine how this happens during web development. First off, I believe that the Unix style 'select then paste with the middle mouse button' isn't widely supported outside of Unix, so people not developing on it can easily miss it entirely. General cut and paste is widely available, but it's also a generally unsexy thing that is neither as obviously attractive nor as frequently used as typing and editing text in a text field. Most of the time you write tweets by hand or type in metrics system rule expressions (possibly with autocompletion); copying them from or to elsewhere is much less common, and less common things accumulate bugs and those bugs get a lower priority for fixing. I doubt the web developers at these places actively want to break cut and paste; it just happens through all too easy accident.

(This is partly a grump, because I'm tired of my cut and paste not working, or only half working if I remember to do it in a way that I rarely use.)

web/WebEditingVsCutAndPaste written at 23:20:46;

Getting some hardware acceleration for video in Firefox 93 on my Linux home desktop

A bit over a year ago, I wrote about my confusion over Firefox 80's potentially hardware accelerated video on Linux. More recently, I had some issues with Firefox's WebRender GPU-based graphics acceleration, which through some luck led to a quite useful and informative Firefox bug. The upshot of all of this, and some recent experimentation, is that I believe I've finally achieved hardware accelerated video (although verifying this was a bit challenging).

My home desktop has an Intel CPU and uses the built-in Intel GPU, and I use what is called a non-compositing window manager. In order to get some degree of GPU involvement in video playback in my environment, I need to be using hardware WebRender and generally force Firefox to use EGL instead of GLX with the about:config gfx.x11-egl.force-enabled setting. This results in video playback that the intel_gpu_top monitoring program says is using about as much of the GPU as Chrome is when playing the same video. In the future no setting will be necessary, as Firefox is switching over to EGL by default. This appears to be about as good as I can get right now on my current hardware for the sort of videos that I'm most interested in playing smoothly.
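
If you want the EGL setting to stick without flipping it in about:config by hand, one option is a user.js preference line in your Firefox profile. This is only a sketch, and you have to substitute your own profile directory for the placeholder:

  # append to the user.js of whatever Firefox profile you actually use (placeholder path)
  echo 'user_pref("gfx.x11-egl.force-enabled", true);' >> ~/.mozilla/firefox/<your profile>/user.js

Firefox reads user.js at startup and applies its preferences as if you had set them in about:config.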

The Arch Linux wiki has a much longer list of steps to get VA-API acceleration. Using those as guidelines, and after some spelunking of the Firefox source and strategic use of a $MOZ_LOG value of 'PlatformDecoderModule:5,Dmabuf:5', the settings that leave things not reporting that VA-API is disabled (for my test video) are setting media.ffmpeg.vaapi.enabled to true and media.rdd-process.enabled to false. Based on an inspection of the current Firefox source code, using a RDD process disables VA-API unconditionally. The current debugging output you want to see (from the Dmabuf module) is:

D/Dmabuf nsDMABufDevice::IsDMABufVAAPIEnabled: EGL 1 DMABufEnabled 1  media_ffmpeg_vaapi_enabled 1 CanUseHardwareVideoDecoding 1 !XRE_IsRDDProcess 1

If any of those are zero, you are not going to be using VA-API today (and the platform decoder module logging will report that VA-API is disabled by the platform). Unfortunately, Firefox's about:support doesn't seem to say anything about how it's doing video playback (although it does have a section for audio), so we have to resort to this sort of digging.

(You need the Dmabuf logging to determine just why VA-API is disabled by the platform, and then having PlatformDecoderModule logging is helpful to understand what else is going on.)
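
To see this logging yourself, you can run Firefox by hand with $MOZ_LOG set in the environment. As a minimal sketch (the grep is only there to cut down the noise, and MOZ_LOG_FILE is an alternative if you'd rather log to a file):

  # start Firefox fresh (not while another instance is already running, or the
  # environment variable won't take effect) and filter for the relevant lines
  MOZ_LOG='PlatformDecoderModule:5,Dmabuf:5' firefox 2>&1 | grep -iE 'dmabuf|vaapi'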

However, after doing all of this to enable VA-API, it appears that VA-API doesn't accelerate decoding of my test video; the CPU and GPU usage is basically the same whether or not VA-API is theoretically enabled. Nor does Chrome seem to do any better here.

My tentative conclusion from the log output and intel_gpu_top information is that I'm probably now using the GPU in Firefox to actually display the video on screen, instead of blasting the pixels into place with the CPU, but video decoding is probably still not hardware accelerated. Running vainfo says that VA-API acceleration is available in general, so this may be either a general software issue or simply that the particular video format can't have its decoding hardware accelerated.
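
For what it's worth, both of the checks involved here can be run directly from a shell. This is only a rough sketch; package names vary by distribution and intel_gpu_top generally needs root or extra capabilities:

  vainfo | grep -i vld    # does the VA-API driver advertise any decode ('VLD') entrypoints?
  sudo intel_gpu_top      # watch the Video engine row while the video plays

As far as I can tell, if vainfo lists decode entrypoints for the relevant codec but the Video engine stays idle during playback, decoding is still happening on the CPU.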

web/Firefox93MyVideoAcceleration written at 00:36:33;

