Cloud and Software Architecture, Soft Skills, IoT and Embedded
Flutter mobile - certificate and SSL exclusions for development domains
Flutter mobile development gets twisty when you're working behind corporate firewalls and with self-signed development certificates. We look at a bit of code, used only in debug builds, that sets the emulator/simulator proxy and disables certificate checks for selected domains.
You can use this sample code, wrapped in an `if (kDebugMode)`, when running Flutter on mobile. It lets you skip certificate checks for your self-signed internal domains while leaving the checks enabled for production and external domains. The code also sets the proxy when running in the Android Emulator or the iOS Simulator; setting the proxy is optional and only happens when you pass in a proxy port.
```dart
import 'dart:io';

/// Ignores certificates in developer mode for a limited set of domains.
/// Simplifies development in corporate self-signed environments.
/// Configures the emulator or simulator proxy if passed a proxy port.
/// Assumes there is a proxy running locally if a proxy port is provided.
/// The proxy is only used for mobile devices. We assume the browser is
/// already configured.
class DevelopmentHttpConfig extends HttpOverrides {
  DevelopmentHttpConfig({
    required this.certExclusionDomains,
    this.iosProxyHost = '127.0.0.1',
    this.androidProxyHost = '10.0.2.2',
    this.proxyPort,
  });

  /// Proxy host used for Android.
  /// Defaults to 10.0.2.2, the emulator's alias for the host machine.
  String androidProxyHost;

  /// Proxy host used for iOS because the simulator uses the host network;
  /// defaults to 127.0.0.1. Android has its own network.
  String iosProxyHost;

  /// List of domains we exclude from certificate checks.
  List<String> certExclusionDomains;

  /// proxyPort != null means set up the proxy if isAndroid or isIOS.
  int? proxyPort;

  @override
  HttpClient createHttpClient(SecurityContext? context) {
    final client = super.createHttpClient(context);
    // Accept bad certificates only for hosts in the exclusion list.
    client.badCertificateCallback =
        (X509Certificate cert, String host, int port) =>
            certExclusionDomains.any((domain) => host.endsWith(domain));
    // Optionally route traffic through a proxy running on the host machine.
    if (proxyPort != null && (Platform.isAndroid || Platform.isIOS)) {
      final proxyHost = Platform.isAndroid ? androidProxyHost : iosProxyHost;
      client.findProxy = (uri) => 'PROXY $proxyHost:$proxyPort;';
    }
    return client;
  }
}
```
I've worked in corporate environments where the Android emulator's 10.0.2.x addressing doesn't work for reaching internal sites, and I have to use the corporate proxy instead.
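Wiring this up is a one-liner in `main()`. Here's a minimal sketch, assuming the `DevelopmentHttpConfig` class above; the exclusion domain, proxy port, and `MyApp` widget are placeholders for your own app, and `kDebugMode` comes from `package:flutter/foundation.dart`.

```dart
import 'dart:io';

import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';

void main() {
  if (kDebugMode) {
    // Debug builds only: trust self-signed certs for our internal domain
    // and route traffic through a local proxy on port 8888 (placeholder).
    HttpOverrides.global = DevelopmentHttpConfig(
      certExclusionDomains: ['internal.example.com'],
      proxyPort: 8888,
    );
  }
  runApp(const MyApp()); // MyApp is your app's root widget.
}
```

Release builds never execute the override, so store-shipped binaries keep full certificate validation.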
Example usage from test case
Here are two test cases. `wrong.host.badssl.com` is a test site served with a deliberately bad certificate. We connect to it once with its domain in our exclusion list and once with the domain absent.
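The two cases can be sketched like this with `package:test`, assuming the `DevelopmentHttpConfig` class above and network access to badssl.com:

```dart
import 'dart:io';

import 'package:test/test.dart';

void main() {
  test('bad cert is accepted when the domain is excluded', () async {
    // badssl.com is in the exclusion list, so the handshake succeeds.
    HttpOverrides.global =
        DevelopmentHttpConfig(certExclusionDomains: ['badssl.com']);
    final client = HttpClient();
    final request =
        await client.getUrl(Uri.parse('https://wrong.host.badssl.com/'));
    final response = await request.close();
    expect(response.statusCode, 200);
    client.close();
  });

  test('bad cert is rejected when the domain is not excluded', () {
    // Exclusion list does not match, so normal cert checking applies.
    HttpOverrides.global =
        DevelopmentHttpConfig(certExclusionDomains: ['example.com']);
    final client = HttpClient();
    expect(
      () async =>
          (await client.getUrl(Uri.parse('https://wrong.host.badssl.com/')))
              .close(),
      throwsA(isA<HandshakeException>()),
    );
  });
}
```

Because `HttpClient()` consults `HttpOverrides.current` at construction time, setting `HttpOverrides.global` before creating the client is enough to swap behavior per test.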