Wednesday, November 26, 2025

Online QR Code Generator - paste your link to get a scannable QR


Tuesday, November 25, 2025

What was it that I described?

I want a practical software system to be deployed in India that is considerate of resources, and is deployed in government institutions to be able to catch instances of corruption. I want land record, court cases and other such data to be mined and in the context of what I described be able to improve governance.

 Fantastic — this is a concrete, high-impact brief: a practical system you can propose/deploy inside Indian government institutions to mine land records, court cases and related datasets, capture failures/“lost” data, and surface likely corruption. Below I give a full, deployable design: architecture, data sources, algorithms, privacy & legal notes, operational plan, and risks — all tuned for Indian settings and constrained resources.

I’ll start with a short overview, then the architecture and components, then deployment & governance, then costs/metrics/risks. Key India-specific facts are cited (land records programmes, court portals, and the data-privacy & RTI context) so you can use them in proposals. (dolr.gov.in)


Short overview (one line)

Build an Epistemic Anti-Corruption Platform (EACP) — an immutable, provenance-aware analytics substrate that ingests land records (DILRMP/ULPIN, state Bhulekh portals), court judgments (eCourts), registrations, and tax & registry data, and runs alert-generation modules that flag suspicious patterns, while preserving all intermediate states and failure traces so future modules/vendors can learn and plug gaps. (dolr.gov.in)


1) Core goals & design principles

  • Preserve everything: immutable append-only store of inputs, intermediate transforms, and error/failure artifacts (the “Epistemic Store”).

  • Explainability & provenance: every alert must show the data lineage and exact steps that created it.

  • Vendor-neutral plugability: modules can be hot-swapped; new vendor module sees full failure traces.

  • Resource-aware: first-class support for low-bandwidth, low-CPU environments (edge aggregation, batching).

  • Legal & privacy-first: design around India’s DPDP / data-protection rules and RTI norms. (ey.com)


2) High-level architecture (stack)

  1. Data Ingest Layer (connectors)

    • Connectors for: state Bhulekh/DILRMP exports, ULPIN mappings, eCourts APIs, land registry/registration offices, municipal tax, property tax, and citizen complaints. Use adapters for CSV/PDF ingest and APIs. (dolr.gov.in)

  2. Immutable Epistemic Store (E-Store) (core innovation)

    • Append-only object store (chunked files + metadata).

    • Store: raw input, parsed records, canonical entity snapshots, transformation events, validation failures, and user overrides.

    • Each write records actor, timestamp, software-version and a cryptographic hash (block-like).

    • Lightweight indexes support queries without deleting history.

  3. Provenance & Versioning Layer

    • Graph-based metadata (who, what, when, why).

    • Versioned entities (land-parcel v1, v2, …), with pointer to source docs and transformation chain.

  4. Canonicalization & Entity Resolution

    • Deduplicate names, owners, parcel IDs across states using fuzzy matching, phonetic codes, spatial joins (survey numbers → ULPIN), and record linkage.

  5. Analytics & Detection Engines (pluggable)

    • Rule engine (policy rules, e.g., suspicious mutation within X days after court order).

    • Statistical anomaly detectors (outlier transfers, sudden ownership clusters).

    • Graph analytics: build owner–parcel–actor graphs, detect dense subgraphs/communities (possible syndicates).

    • Temporal drift detectors: detect improbable version edits (backdating, mass edits).

    • ML modules: supervised models trained on labeled corruption cases; unsupervised (autoencoders, isolation forest) for unknown patterns.

  6. Alerting / Case Generator

    • Alerts become “cases” with full provenance snapshot and link to original docs and failure logs.

    • Triage criteria and severity scoring.

    • Audit trail for every human action on a case (who viewed, suppressed, escalated).

  7. Sandbox & Vendor Plug-in API

    • Secure, containerized runtime for third-party modules (submit ML models or rule-sets).

    • Modules run against “copies” of data slices; results are versioned and stored.

    • New vendor code cannot delete original E-Store records — only append.

  8. Dashboard & Investigator UI

    • Lightweight web UI for public servants: filterable cases, graph visualizer, document viewer (side-by-side), and an explanation pane showing lineage and "why flagged".

  9. Edge Aggregation Nodes

    • Thin nodes deployed at district/state level to pre-validate and compress before syncing to central E-Store to save bandwidth.

  10. Ops & Auditing

    • Immutable logs, role-based access, cryptographic audit (optional blockchain anchoring for court-admissibility).
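A minimal sketch of the E-Store write path in item 2, assuming an in-memory store for illustration; a production deployment would use a chunked object store, but the hash-chaining idea is the same:

```python
import hashlib
import json
import time

class EpistemicStore:
    """Append-only store: each record carries actor, timestamp,
    software version, and a hash chained to the previous record."""

    def __init__(self):
        self.records = []          # in production: chunked object store
        self.last_hash = "0" * 64  # genesis hash

    def append(self, actor, software_version, payload):
        record = {
            "actor": actor,
            "timestamp": time.time(),
            "software_version": software_version,
            "payload": payload,
            "prev_hash": self.last_hash,
        }
        serialized = json.dumps(record, sort_keys=True).encode("utf-8")
        record["hash"] = hashlib.sha256(serialized).hexdigest()
        self.records.append(record)
        self.last_hash = record["hash"]
        return record["hash"]

    def verify_chain(self):
        """Recompute every hash; any in-place tampering breaks the chain."""
        prev = "0" * 64
        for r in self.records:
            if r["prev_hash"] != prev:
                return False
            body = {k: v for k, v in r.items() if k != "hash"}
            serialized = json.dumps(body, sort_keys=True).encode("utf-8")
            if hashlib.sha256(serialized).hexdigest() != r["hash"]:
                return False
            prev = r["hash"]
        return True
```

Because every record embeds the previous record's hash, editing any historical entry invalidates all later hashes, which is the "block-like" property mentioned above.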


3) Practical data sources (India-specific)

  • DILRMP / State Bhulekh portals — digitized land records across states (ingest via state exports/CSV/PDF). (dolr.gov.in)

  • ULPIN — unified parcel ID helps cross-walk survey numbers and map parcels. Use ULPIN mapping during canonicalization. (dolr.gov.in)

  • eCourts / NJDG / CNR — case metadata, judgments and orders (public APIs / scraping with care). (services.ecourts.gov.in)

  • Registrar / stamp-duty / property-tax databases — verify transaction timing and consideration amounts.

  • Citizen complaints, RTI disclosures, gazette notifications — for audits and cross-checks.
    (Where APIs unavailable, use scheduled data pulls and OCR pipelines for scanned documents.)


4) Detection patterns & algorithms (concrete examples)

  • Ownership churn: parcels with many ownership transfers within short time windows → flag for money-laundering/shell flipping. (temporal sliding-window + threshold)

  • Backdated mutations: parcel updated with earlier timestamp than previous state or many edits with same operator → flag. (provenance comparison)

  • Court-order bypass: registrations occurring after court stay orders or before the case was listed → cross-check eCourts timeline vs registry timestamp.

  • Benami signatures: owner names that match PEP lists, or owner address correspondence with known shell addresses. (entity resolution + third-party watchlists)

  • Graph fraud cycles: detect small group of actors repeatedly transferring parcels among themselves — dense subgraph detection / community detection.

  • Valuation mismatch: declared sale price far below average market value in region for similar parcels → tax evasion suspicion.

  • OCR / NLP anomalies: inconsistent wording across mutation documents; suspicious templated edits. (NLP + document similarity score)

Each alert includes a provenance bundle: the exact inputs, transformation steps, and failure logs that produced the alert.
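As an illustration, the ownership-churn pattern above can be implemented as a sliding-window count over transfer events; the 180-day window and 3-transfer threshold are placeholder tuning values, not figures from this design:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_ownership_churn(transfers, window_days=180, max_transfers=3):
    """transfers: list of (parcel_id, transfer_date) tuples.
    Flags parcels with more than max_transfers inside any
    window_days-long sliding window."""
    by_parcel = defaultdict(list)
    for parcel_id, date in transfers:
        by_parcel[parcel_id].append(date)

    flagged = {}
    window = timedelta(days=window_days)
    for parcel_id, dates in by_parcel.items():
        dates.sort()
        start = 0
        for end in range(len(dates)):
            # shrink window from the left until it spans <= window_days
            while dates[end] - dates[start] > window:
                start += 1
            count = end - start + 1
            if count > max_transfers:
                flagged[parcel_id] = count
    return flagged
```

In the real system the flagged parcels would become cases carrying the provenance bundle described above, with thresholds tuned against verified pilot alerts.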


5) Epistemic failure capture & vendor handover (how to enable replacement modules)

  • All failures recorded: parsing errors, missing fields, uncertain linkages, low-confidence matches, and operator overrides are saved as first-class records in E-Store.

  • Module contract: any detection module must publish metadata: what inputs it used, confidence, version, and failure reasons.

  • Handover flow: when Program A fails to process an event (e.g., low-confidence resolution), the system marks those events as “pending/expert review” and exposes them to third-party vendors via a controlled sandbox API with synthetic or redacted data. Vendors can submit candidate solutions that are evaluated and, once validated, promoted to production.

  • Audit & rollback: new modules append their outputs; previous state remains immutable — easy rollback and explainability.
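To illustrate the module contract, a sketch validator that rejects module outputs missing the required metadata; the field names are illustrative, not a fixed schema:

```python
REQUIRED_FIELDS = {"module", "version", "inputs", "confidence", "failure_reason"}

def validate_module_output(output):
    """Reject module outputs that omit required contract metadata,
    so every appended result stays auditable."""
    missing = REQUIRED_FIELDS - output.keys()
    if missing:
        raise ValueError(f"module output missing contract fields: {sorted(missing)}")
    if not 0.0 <= output["confidence"] <= 1.0:
        raise ValueError("confidence must be in [0, 1]")
    return output

# Example output from a hypothetical entity-resolution module:
example = {
    "module": "vendor-b-entity-resolver",
    "version": "0.3.1",
    "inputs": ["parcel/ULPIN-001/v2", "court/CNR-XYZ/order-7"],
    "confidence": 0.42,
    "failure_reason": "low-confidence name match; sent to expert review",
}
```

Outputs passing validation would be appended to the E-Store alongside the inputs they reference, never overwriting earlier records.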


6) Privacy, legal & governance (must-haves)

  • Law & policy: design to comply with India’s Digital Personal Data Protection regime and RTI obligations. Personal data should be processed only for legitimate public interest, with DPIA (Data Protection Impact Assessment). (ey.com)

  • Data minimization & role-based access: investigators get access on a need-to-know basis; anonymize/redact for vendor sandboxes.

  • Retention & archival policy: E-Store is immutable but access to personal details can be time-limited per law.

  • Independent oversight: a multi-stakeholder review board (judicial/ombudsman) to review flagged cases pre-escalation.

  • RTI & transparency: publish non-sensitive system metrics and anonymized outcomes per RTI norms while protecting ongoing investigations. (legalaffairs.gov.in)


7) Deployment plan & resource constraints (practical for India)

Phased rollout to limit resource needs:

Phase 0 — Pilot (3–6 months):

  • Pick one district/state with digitized Bhulekh and cooperative registry + eCourts access.

  • Deploy single edge node + central E-Store hosted on government cloud (or co-located).

  • Focus on 2–3 key detectors (ownership churn, court-order bypass, valuation mismatch).

  • Train investigators, produce 50–100 verified alerts to refine thresholds.

Phase 1 — Scale (6–18 months):

  • Add more states, enable ULPIN mapping, integrate registrar and property-tax feeds.

  • Add vendor plugin program (sandbox).

  • Improve ML models with labeled cases.

Phase 2 — Nationwide (18–36 months):

  • Central E-Store replicated across zones, full analytics suite, and public dashboards (non-sensitive).

Resource notes:

  • Use containerized microservices (Kubernetes) with autoscaling.

  • Heavy ML training offloaded to central GPU nodes; inference runs on cheaper CPU nodes.

  • Use compression and deduplication to reduce storage (parcels compress well).

  • Edge nodes handle OCR + pre-filtering to cut network traffic.


8) Operational & human factors

  • Investigator UX: quick triage, one-click “provenance view”, and ability to attach case notes and RTI requests.

  • Training: train revenue officers, judicial clerks and anti-corruption bureaus on interpretation and false-positive mitigation.

  • Feedback loop: human labels feed ML retraining; all corrections logged in E-Store.


9) Risks, limitations & mitigations

  • Data quality (scans, inconsistent numbering): mitigate with robust OCR, human-in-the-loop validation, and ULPIN alignment.

  • False positives: set conservative thresholds, and require human validation for escalations.

  • Political misuse risk: governance board + audit logs + legal oversight to prevent weaponization.

  • Vendor trust: redaction & sandboxing, strict SLAs, and contractual audit rights.

  • Legal/regulatory changes: implement modular privacy layer so compliance rules can be updated without re-engineering core.


10) Success metrics (KPIs)

  • Reduction in time-to-detect suspicious land fraud (baseline → target)

  • % of flagged cases that lead to verified investigation / action (precision)

  • Mean time from ingest → alert (latency)

  • Volume of previously undiscovered patterns discovered by vendor modules

  • Number of successful vendor plug-ins validated from failure corpus


11) Rough cost & staffing (very rough, for proposal)

  • Initial pilot: ₹50–150 lakh (infrastructure, engineering, OCR licenses, one-year ops)

  • Staff: 1 PO, 2 data engineers, 2 backend devs, 1 frontend, 1 ML engineer, 2 investigators/subject experts (pilot)

  • Scaling adds more infra and 2–3x staffing by state.

(These numbers are ballpark; adjust for cloud credits, reuse of state infra, and open-source tooling.)


12) Next concrete deliverables I can produce for you (pick any)

  1. Detailed data model and schemas for the Epistemic Store + sample JSON records.

  2. Sample detection rules & pseudocode (e.g., ownership churn detector).

  3. A one-page proposal / executive summary for a state government with legal annex (DPDP & RTI compliance).

  4. A 6–12 month pilot project plan with milestones and resource list.

  5. Example API spec for vendor sandbox & plug-in contracts.

Which of these shall I produce now? (I’ll pick one and deliver the full artifact right away.)

What was the prompt or trail of prompts for this?

 Yes — the idea you’re describing is not only possible, it is the logical next step once you have:

  • Immutable historical state

  • Captured failures and discarded data

  • Transparent computation logs

  • A system that never truly loses information

Once such a substrate exists, something powerful becomes possible:


Programs can fail gracefully and hand over execution to another program — with full context preserved.

This is called a self-healing, replaceable, evolvable computation environment.

Let’s break down how this works conceptually and practically.


🧩 1. Program A fails → but all failure context is preserved

In your envisioned architecture:

When Program A:

  • fails

  • times out

  • produces wrong outputs

  • cannot handle a specific class of events

…it does not crash silently.

Instead it emits:

Failure Object

Contains:

  • Inputs that triggered the failure

  • Internal states at the time

  • What paths were executed

  • Error traces

  • What mathematically did not converge

  • Environmental conditions

  • Time spent and resources consumed

  • What it attempted but couldn’t do

This is exactly like a biological system keeping memories of pain or failure.

Nothing is lost.
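A failure object carrying the context above could be captured as a plain record; this dataclass is a sketch with illustrative field names:

```python
from dataclasses import dataclass, field, asdict
import time

@dataclass
class FailureObject:
    """Everything Program A knew at the moment it failed, preserved
    so a later program can pick up exactly where it stopped."""
    inputs: dict            # the inputs that triggered the failure
    internal_state: dict    # internal states at the time
    executed_paths: list    # which code paths ran
    error_trace: str        # the error / non-convergence reason
    resources: dict = field(default_factory=dict)  # time & resources consumed
    attempted: str = ""     # what it attempted but could not do
    timestamp: float = field(default_factory=time.time)

    def to_record(self):
        # Serializable form, ready to append to the Epistemic Store
        return asdict(self)
```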


🧩 2. Program B (from a different vendor or model) can later be plugged in

Because all of the above context is saved in the Epistemic Store, a new program (Program B) can join later and:

It immediately sees:

  • Every historical failure

  • Every unhandled event

  • Every piece of data Program A discarded

  • Every pattern that A could not process

  • Examples of failures to generalize

  • Where performance fell below expectations

This creates a continuous learning ecosystem.


🧩 3. Program B fixes the problem Program A missed

You get automated backward compatibility and forward evolutionary improvement.

Instead of:

  • rewriting software from scratch

  • hoping the new version handles corner cases

  • spending months on bug triage

  • rediscovering old issues

Program B reads Program A’s failure traces and adjusts:

  • A machine-learning model could train from A’s failures.

  • Another vendor could implement missing cases.

  • A domain-specific tool could solve only the known failure types.

  • A special-purpose solver could pick up the “hard modes” A left.

This makes software evolvable across vendors, versions, and architectures.


🧩 4. Runtime Decision: Who should handle a new event?

At runtime, the system can:

Route future events based on past failures

If a new event looks similar to:

  • Failure #17

  • Or unhandled case #260

  • Or divergence pattern #431

…the manager hands the event to Program B, not A.

This is exactly like:

  • biological modular redundancy

  • fault-tolerant aircraft control systems

  • distributed AI agents with fallback logic

  • neural circuits where alternate pathways fire when primary ones fail
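The routing step can be sketched as matching a new event's signature against stored failure signatures; the frozenset-of-keys signature used here is a deliberately crude stand-in for a real similarity measure:

```python
def signature(event):
    # Crude similarity key: which fields the event carries.
    # A real system would use a learned or structural similarity measure.
    return frozenset(event.keys())

class Router:
    """Routes events to Program B when they resemble events
    Program A has failed on before; otherwise keeps them on A."""

    def __init__(self):
        self.failed_signatures = set()

    def record_failure(self, event):
        self.failed_signatures.add(signature(event))

    def route(self, event):
        if signature(event) in self.failed_signatures:
            return "program_b"
        return "program_a"
```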


🧩 5. Over time, you get an ecosystem of cooperating programs

Instead of a single brittle software system, you get a computational ecosystem:

  • Program A handles the general cases

  • Program B handles corner cases A historically failed

  • Program C handles rare situations both A and B historically could not solve

  • Program D is experimental and learns from all past mistakes

This resembles:

  • Ensemble learning

  • Mixture-of-experts

  • Evolutionary computation

  • Biological compensation (e.g., neural plasticity after injury)


🧩 6. Vendor neutrality becomes possible

You asked:

“…could another vendor develop software later and, once deployed, handle the problem that got missed?”

Yes — beautifully.
Because:

  • Everything Program A ever did wrong is stored.

  • The new vendor does not need the source code of A.

  • They only need the epistemic trace of failures.

  • Their Program B can be completely different in architecture.

  • The system knows automatically which program to use for which scenario.

This is incredibly valuable in industries like:

  • healthcare diagnostics

  • air traffic control

  • autonomous driving

  • large-scale fintech

  • national ID or public distribution systems

  • critical infrastructure

where no single program can ever be perfect.


🧩 7. This leads to a new paradigm: “Competitive Collaboration in Software”

Instead of:

  • a monolithic program doing everything

  • humans writing patches endlessly

  • vendors blaming each other

You get:

A marketplace of modules competing to handle events more reliably than others,
and the system automatically promotes whichever program produces the fewest failures.

This mirrors:

  • biological evolution

  • competitive learning in AI

  • swarm intelligence

  • multi-agent reinforcement learning
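Promotion by "fewest failures" can be as simple as ranking modules by observed failure rate; this scoreboard is an illustrative sketch, not a production promotion policy:

```python
class ModuleScoreboard:
    """Tracks per-module outcomes and promotes the module with the
    lowest observed failure rate to handle future events."""

    def __init__(self):
        self.stats = {}  # module -> [successes, failures]

    def record(self, module, ok):
        s = self.stats.setdefault(module, [0, 0])
        s[0 if ok else 1] += 1

    def failure_rate(self, module):
        ok, bad = self.stats[module]
        return bad / (ok + bad)

    def promoted(self):
        return min(self.stats, key=self.failure_rate)
```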


Conclusion

Yes — if data, failures, and discarded computations are all preserved:

Software can hand off functions it cannot handle to other software.

Future programs—by any vendor—can plug in and solve what previous versions missed.

The whole system becomes self-healing, adaptive, and continuously improving.

This is a fundamentally new direction for computing — far beyond conventional binary logic.


If you want next, I can describe:

🔹 A concrete architecture for this multi-program evolutionary ecosystem
🔹 How the manager decides which program handles which event
🔹 Data schemas for storing failures and context
🔹 How different vendors’ modules collaborate and compete
🔹 A real-world example (e.g., PDS, Aadhaar, UPI, autonomous cars, medical diagnosis)

Which one should I explain next?

Wednesday, November 19, 2025

Website for Contextual Jokes

 Since I lack the skills to make websites end-to-end, I would rather settle for discussing ideas. Here's one more:

How can I make a website that takes as input a situation as narrated by a person and cracks a contextual joke?

Below is a complete, practical blueprint for building a website that takes a narrated situation from a user and generates a contextual joke. I’ve broken it into architecture, models, prompting, backend, frontend, and deployment.


1. System Architecture

User Flow

  1. User types (or records) a small narration of a situation.

  2. Backend sends the text to an AI model with a joke-generation prompt.

  3. Model returns a contextual joke, respecting tone and safety.

  4. UI displays the joke.

High-Level Architecture

Frontend (React/HTML) → Backend (FastAPI/Node) → LLM API (OpenAI / Llama / local model)

2. Choosing the Model

Best options:

  • OpenAI GPT models (most consistent humor)

  • Llama 3 / Mistral (via Together, Ollama, Groq)

  • Local small model like Gemma-2-9B + fine-tuning (for offline use)

Humor is a subtle task, so good prompting is often better than fine-tuning.


3. The Key Component: A Good Prompt

You want a stable, instruction-style system prompt:

System Prompt

You generate contextual jokes based on situations narrated by users.

Rules:
- Be witty but not offensive.
- Joke must reflect the exact situation.
- Keep the punchline under 2 sentences.
- No dark humor, sensitive topics, or personal insults.
- If the narration is unclear, ask for a fun clarification.

User Prompt Template

Here is the user's situation:
"{user_situation}"

Give a short contextual joke about it.

This gives reproducible, safe humor.


4. Backend Implementation

Below is a minimal FastAPI (Python) backend example using OpenAI’s API:

server.py

from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

client = OpenAI()
app = FastAPI()

class Situation(BaseModel):
    text: str

SYSTEM_PROMPT = """
You generate contextual jokes based on situations narrated by users.
Be witty but not offensive. Keep jokes under 2 sentences.
"""

@app.post("/joke")
def generate_joke(situation: Situation):
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"User situation: {situation.text}"}
        ]
    )
    return {"joke": response.choices[0].message.content}

Run:

uvicorn server:app --reload

5. Frontend Implementation

Very simple HTML/JS page (works in Blogger or any static host):

<div style="max-width:500px;margin:auto;">
  <h2>Contextual Joke Generator</h2>

  <textarea id="situation" rows="5" placeholder="Describe your situation..."></textarea>
  <br><br>
  <button onclick="getJoke()">Crack a joke</button>

  <h3>Joke:</h3>
  <div id="output" style="font-size:1.2em; font-weight:bold;"></div>
</div>

<script>
async function getJoke() {
    const text = document.getElementById("situation").value;

    const res = await fetch("http://localhost:8000/joke", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text })
    });

    const data = await res.json();
    document.getElementById("output").innerText = data.joke;
}
</script>

You can replace the backend URL when deploying (Render, Railway, Vercel, etc).


6. Safeguards / Filtering

You MUST protect against:

  • Abuse or illegal content

  • Depressing personal situations

  • Harassment

  • Sensitive tragedies

Use a filter before generating jokes:

if "death" in situation.text.lower() or "accident" in situation.text.lower():
    return {"joke": "This seems serious — let's keep things respectful. Want a light-hearted joke instead?"}

Or use LLM classifier models (“moderation”) if needed.


7. Optional Advanced Features

1. Speech Input

Use Web Speech API:

var recognition = new webkitSpeechRecognition();

2. Joke Style Selector

  • Sarcastic

  • Dad joke

  • Office humor

  • Tech humor

  • PG-13 / clean

3. Personalized humor profile

User picks their humor preferences → stored → used in prompt.


8. Deployment Options

Free / Simple

  • Frontend: GitHub Pages / Netlify

  • Backend: Render's free tier

Fully serverless

Use Cloudflare Workers + AI (Claude, OpenAI through Cloudflare)

Offline

Run Llama/Gemma via Ollama on local machine or cheap server.


If you want, I can generate:

✔ A complete ready-to-host Blogger-compatible version
✔ A full React frontend
✔ A one-click Vercel deployment
✔ A Node.js/Express backend version
✔ A funny-joke model fine-tuning dataset

Just tell me which version you prefer!

Feature Flag orchestration guide

If I am the maker of a website or app and want certain features killed in a specific build that is going to be deployed to several million devices, can you explain at the code level how specific features can be enabled or disabled on a per-device basis, and how all this can be orchestrated?

Nice — this is exactly the sort of thing you want nailed before a multi-million device rollout. Below I’ll give a practical, code-level guide you can implement: concepts, architecture, safety knobs, and concrete code snippets (server and client) that show how to enable/disable features on a per-device basis and how to orchestrate rollouts.

1) Core idea — feature flags / remote config

  • Keep feature control data (flags, rules, percentages, targeting) separate from app binary.

  • Apps ask a centralized service (or a CDN cache) for the flag state for a given principal (device id / user id / session).

  • Evaluate flags server-side when possible (most secure) or client-side for responsiveness (with signed config + local cache).

  • Provide kill switches (global off), percentage rollouts (gradual), targeted rollouts (device model, country), and experiment buckets (A/B).

2) Basic components

  1. Flag store / control plane — where product/ops define flags and targeting rules (DB + UI).

  2. Evaluation service / SDK — evaluates which variant a given device should get. Can be:

    • Server-side evaluation API (recommended for sensitive control).

    • Lightweight client SDK with local cache + occasional sync.

  3. Delivery — CDN + TLS + signing for cached configs; use shorter TTLs for rapid kill.

  4. Telemetry / metrics — track flag exposures and success/failure metrics.

  5. Safety — global kill switch, audit logs, incremental rollout, circuit-breakers.

3) Flag model (simple)

Each flag entry:

{
  "key": "new_payment_flow",
  "default": false,
  "rules": [
    { "type": "user_id_allow", "ids": ["123","456"] },          // explicit allow
    { "type": "device_model", "values": ["Pixel5","iPhone12"] },
    { "type": "country", "values": ["IN","PK"] },
    { "type": "percentage", "percent": 20, "salt": "newpay-v1" } // 20% rollout
  ],
  "created_by": "pm@company.com",
  "created_at": "2025-11-01T10:00:00Z",
  "kill_switch": false
}

4) Deterministic bucketing (important)

To do percentage rollouts that are sticky per device, compute a deterministic hash of (salt + device_id) and map to 0–99. Devices with value < percent are in the cohort. Example function (JS):

// simple stable bucket: returns 0..99
function stableBucket(deviceId, salt = "") {
  // djb2-style hash computed character-by-character (deterministic)
  let h = 5381;
  const s = salt + "|" + deviceId;
  for (let i = 0; i < s.length; i++) {
    h = ((h << 5) + h) + s.charCodeAt(i); // h * 33 + c
    h = h & 0xffffffff; // keep 32-bit
  }
  // unsigned
  return Math.abs(h) % 100;
}

// usage:
if (stableBucket("device-abcdef", "newpay-v1") < 20) {
  // in 20% rollout
}

5) Example: server-side evaluation service (Node + Redis)

A small HTTP endpoint that evaluates flags for a device; caches rules in Redis for speed.

// server/evaluator.js (Node/Express, node-redis v4)
const express = require('express');
const { createClient } = require('redis');
const app = express();
app.use(express.json()); // body-parser is no longer needed; Express parses JSON natively
const client = createClient();
client.connect();

function stableBucket(deviceId, salt="") {
  let h = 5381;
  const s = salt + "|" + deviceId;
  for (let i = 0; i < s.length; i++) {
    h = ((h << 5) + h) + s.charCodeAt(i);
    h &= 0xffffffff;
  }
  return Math.abs(h) % 100;
}

// load flags from Redis (or DB) -- assume cached JSON at "flags:all"
async function getFlags() {
  const data = await client.get('flags:all'); // node-redis v4 returns promises
  return data ? JSON.parse(data) : {};
}

app.post('/evaluate', async (req, res) => {
  const { device_id, user_id, country, device_model } = req.body;
  const flags = await getFlags();
  const result = {};
  for (const f of Object.values(flags)) {
    if (f.kill_switch) { result[f.key] = false; continue; }
    let on = f.default;
    // rules short-circuit: if any allow rule hit -> set true; explicit deny rules could be supported too
    for (const rule of f.rules || []) {
      if (rule.type === 'user_id_allow' && rule.ids?.includes(user_id)) { on = true; break; }
      if (rule.type === 'device_model' && rule.values?.includes(device_model)) { on = true; break; }
      if (rule.type === 'country' && rule.values?.includes(country)) { on = true; break; }
      if (rule.type === 'percentage') {
        if (stableBucket(device_id, rule.salt || f.key) < (rule.percent || 0)) { on = true; break; }
      }
    }
    result[f.key] = on;
  }
  res.json({ flags: result, server_time: new Date().toISOString() });
});

app.listen(3000);

6) Example: client SDK (JS) with local cache & fallback

Client requests /evaluate on startup and caches result. If offline, use last cached flags + safe defaults. Use short TTL for critical flags.

// client/flagClient.js
async function fetchFlags(deviceInfo) {
  try {
    const r = await fetch('https://flags.example.com/evaluate', {
      method: 'POST',
      body: JSON.stringify(deviceInfo),
      headers: {'content-type':'application/json'}
    });
    const json = await r.json();
    localStorage.setItem('flags:cached', JSON.stringify({ts:Date.now(), payload: json}));
    return json.flags;
  } catch (e) {
    // offline or network failure: use cache
    const cached = JSON.parse(localStorage.getItem('flags:cached') || 'null');
    if (cached) return cached.payload.flags;
    // final fallback: empty
    return {};
  }
}

// usage in app
(async () => {
  const deviceInfo = { device_id: DEVICE_ID, user_id: USER_ID, country: 'IN', device_model: 'Pixel5' };
  const flags = await fetchFlags(deviceInfo);
  if (flags['new_payment_flow']) {
    startNewPaymentFlow();
  } else {
    startLegacyPayment();
  }
})();

Security note: if a flag unlocks a sensitive server path, the server must authorize (server-side) — don't rely purely on client flags.

7) Orchestration & rollout strategies

  • Gradual % rollout: start at 0 → 1% → 5% → 25% → 100%. Use deterministic bucketing so devices stay in same bucket across updates.

  • Canary by cohort: route a percentage of traffic or specific devices (internal QA devices) to the new feature.

  • Geo / device targeting: limit to certain countries or device models.

  • User segment: power users, paid users, etc.

  • Time-based rules: enable on a date/time window.

  • Kill switch: global boolean that can be toggled to instantly disable feature everywhere. Put kill_switch evaluation before rules.
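Because bucketing is deterministic and a device is in a cohort exactly when its bucket is below the percentage, ramping 1% → 5% → 25% only ever adds devices; no one drops out mid-rollout. A quick check of that property, using a SHA-1 bucket like the Python evaluator later in this guide:

```python
import hashlib

def stable_bucket(device_id, salt=""):
    # Deterministic: same device + salt always lands in the same bucket 0..99
    s = (salt + "|" + device_id).encode("utf-8")
    return int(hashlib.sha1(s).hexdigest()[:8], 16) % 100

def cohort(devices, percent, salt):
    return {d for d in devices if stable_bucket(d, salt) < percent}

devices = [f"device-{i}" for i in range(1000)]
cohorts = [cohort(devices, p, "newpay-v1") for p in (1, 5, 25, 100)]

# Each ramp step contains the previous one: as percent grows,
# devices are only ever added to the rollout, never removed.
for smaller, larger in zip(cohorts, cohorts[1:]):
    assert smaller <= larger
```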

8) Telemetry and safety

  • Emit an exposure event whenever the client or server evaluates a flag: {timestamp, flag_key, device_id_hash, variant, context}. Use hashed device id to preserve privacy.

  • Track errors and KPIs (error rate, latency, crash rate) by flag exposure. Ramp back if errors rise.

  • Automated alerting based on metric thresholds.

  • Keep audit trail of who changed flags + when.

9) Performance & caching

  • Keep flag config small and cache on CDN/edge. TTL tradeoffs:

    • Short TTL (e.g., 30s–1min): quick kill but more load.

    • Long TTL (10m–1h): less load but slower response to kills.

  • Use push (WebSocket/FCM/APNs) to notify clients of critical flag changes (e.g., kill switch) so they fetch immediately.

10) Data schema changes & migrations

  • Use flags to gate new database schema changes (e.g., feature off uses old schema). Migrate progressively:

    1. Add optional columns.

    2. Run backend to write both old+new data under a gate.

    3. Flip flag for backends and clients to read new schema.

    4. Remove old code after full rollout.
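Steps 1–3 amount to a flag-gated dual write; sketched here with in-memory "tables" (the flag keys dual_write and read_new are illustrative):

```python
class UserStore:
    """Dual-write migration: while the gate is on, writes go to both
    the old and new schema; reads flip to the new schema only when
    read_new is enabled, so the flag order controls the migration."""

    def __init__(self, flags):
        self.flags = flags
        self.old_table = {}   # old schema: single full_name column
        self.new_table = {}   # new schema: split first/last columns

    def write(self, user_id, first, last):
        self.old_table[user_id] = f"{first} {last}"
        if self.flags.get("dual_write"):
            self.new_table[user_id] = {"first": first, "last": last}

    def read(self, user_id):
        if self.flags.get("read_new"):
            return self.new_table[user_id]
        return self.old_table[user_id]
```

Once all readers are on the new schema and a backfill has covered old rows, step 4 (removing the old path) becomes a safe cleanup rather than a risky cutover.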

11) Rollback & incident playbook

  • Have a single global kill switch that bypasses rules (immediately off).

  • Maintain a team runbook: how to flip kill switch (UI + API + direct DB update), who authorizes, communication plan.

12) Example: signed client config (prevent tampering)

If you do client evaluation, sign the flag bundle with the server's private key and verify it in the client with the corresponding public key (an asymmetric signature such as Ed25519; HMAC would require shipping the shared secret inside the client, which defeats the purpose):

// server: produce signed config
// pseudo:
const payload = JSON.stringify({flags: {...}, exp: Date.now()+60000});
const signature = ed25519_sign(serverPrivateKey, payload);
return { payload, signature };

// client: verify with the embedded PUBLIC key only
if (ed25519_verify(serverPublicKey, resp.payload, resp.signature)) {
  useFlags(JSON.parse(resp.payload).flags);
} else {
  // ignore, use cached fallback
}

(Use proper crypto libs; never embed private keys in the client.)

13) Multi-platform notes

  • Mobile (iOS/Android): use SDK that caches flags, receives push notifications for immediate refresh, uses deterministic device id (or hashed advertising id) for bucketing. Keep TTL small for kill switches.

  • Web: fetch flags on boot, store in cookie/localStorage; consider server-side rendering to ensure search engines and first render obey server evaluation.

  • Backend: do sensitive checks server-side (e.g., enable payment route only if server says so).

14) Danger / anti-patterns to avoid

  • Shipping logic for enabling sensitive features purely to the client (easy to tamper). Always enforce critical checks on server.

  • Using non-deterministic random for rollouts (users get different experiences every request). Use deterministic hashing.

  • Long TTLs for critical flags (makes kill slow).

  • No telemetry or metrics — you must measure.
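The deterministic-hashing point can be demonstrated directly: with a stable bucket, ramping from 10% to 20% only ever adds users; nobody who already had the feature loses it. A small sketch (same SHA-1 scheme as the Python evaluator in section 15; device ids are made up):

```python
# Deterministic bucketing: the same device always lands in the same
# 0-99 bucket, so raising the rollout percent strictly adds users.
import hashlib

def stable_bucket(device_id: str, salt: str = "") -> int:
    h = int(hashlib.sha1(f"{salt}|{device_id}".encode()).hexdigest()[:8], 16)
    return h % 100

devices = [f"device-{i}" for i in range(1000)]
at_10 = {d for d in devices if stable_bucket(d) < 10}   # 10% rollout
at_20 = {d for d in devices if stable_bucket(d) < 20}   # ramped to 20%
print(at_10 <= at_20)  # True: ramping never kicks anyone out
```

With random() instead of a stable hash, a user could flip in and out of the feature on every request, which is exactly the anti-pattern above.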

15) Example: quick Python evaluator (for integrations)

# evaluator.py
import hashlib, json, time
FLAGS = {} # load from DB/redis

def stable_bucket(device_id: str, salt: str="") -> int:
    s = (salt + "|" + device_id).encode('utf-8')
    h = int(hashlib.sha1(s).hexdigest()[:8], 16)
    return h % 100

def eval_flag(flag, ctx):
    if flag.get('kill_switch'): return False
    on = flag.get('default', False)
    for rule in flag.get('rules', []):
        t = rule['type']
        if t == 'user_id_allow' and ctx.get('user_id') in rule.get('ids', []):
            return True
        if t == 'device_model' and ctx.get('device_model') in rule.get('values', []):
            return True
        if t == 'country' and ctx.get('country') in rule.get('values', []):
            return True
        if t == 'percentage':
            if stable_bucket(ctx.get('device_id',''), rule.get('salt','')) < rule.get('percent',0):
                return True
    return on

16) Orchestration & workflow (who does what)

  • Product configures flag intent and rollout plan.

  • SRE/Release sets kill switch and rollout gates in the control plane.

  • Engineers instrument exposure events and gate code paths.

  • Data/Analytics monitor KPI changes and recommend ramp up/down.

  • Legal/Privacy ensure device ids are hashed and PII handled correctly.

17) Checklist before rollout to millions

  • Flag control UI + API (audit logs + RBAC).

  • Deterministic bucketing implemented.

  • Server enforcement for sensitive paths.

  • Client caching + fallback behavior defined.

  • Telemetry for exposures + metrics instrumented.

  • Kill switch tested (flip and observe).

  • Test groups (QA, internal) pinned.

  • Rollout plan (percent steps + criteria to proceed).

  • Alerts for spike in errors/crashes.


If you want, I can:

  • Produce a small, ready-to-deploy reference repo layout (server + client) in your preferred language/framework.

  • Draft the rollout playbook and monitoring queries you should add to Prometheus/Grafana (or Datadog).

  • Give a short, copyable policy text for RACI/approval for flips and kills.

Which of those would be most useful next?

Tuesday, August 19, 2025

Kubernetes pods to run microservices launched from a user-generated blog

 Q: In my blogger.com blog I want users to be able to upload pics and create a collage, but I can't have a hosted server do this.. I want Kubernetes pods to be launched when any service request comes, the collage images are sent to the pods and result can be downloaded by the user. Is this possible?

Yes, it is possible to implement the functionality you described using Kubernetes and serverless principles. Here’s an outline of how you can design such a system:


Architecture Overview

  1. Frontend (Blogger Blog)

    • Your Blogger site acts as the frontend where users upload images and request the collage.

    • JavaScript on the blog can handle image uploads and send them to a backend API.

  2. Backend API Gateway

    • Use a cloud-hosted API gateway (e.g., AWS API Gateway, Google Cloud Endpoints) to accept requests.

    • The API gateway triggers the creation of a Kubernetes pod for processing.

  3. Kubernetes Cluster

    • Kubernetes is configured to launch pods on demand.

    • Each pod contains a containerized application that processes the collage request.

  4. Collage Service (Inside Pods)

    • This service receives uploaded images, processes the collage, and stores the result in cloud storage (e.g., AWS S3, Google Cloud Storage).

    • A signed download URL is sent back to the user for retrieving the collage.

  5. Cloud Storage

    • Temporary storage for user-uploaded images and the final collage.

    • Ensures scalability and reduces dependency on persistent storage within pods.


Steps to Implement

1. Blogger Integration

  • Add a custom HTML/JavaScript form to your Blogger blog.

  • Use JavaScript to handle file uploads and communicate with the API gateway.

  • Example (AJAX):

    const formData = new FormData();
    formData.append("image1", file1);
    formData.append("image2", file2);
    
    fetch('https://your-api-gateway-url/upload', {
        method: 'POST',
        body: formData
    })
    .then(response => response.json())
    .then(data => {
        console.log('Download link:', data.download_url);
        // Show the link to the user
    });
    

2. API Gateway

  • Choose a cloud provider for the API gateway:

    • AWS: API Gateway with AWS Lambda to trigger Kubernetes jobs.

    • Google Cloud: Cloud Endpoints or Cloud Functions.

    • Azure: Azure API Management.


3. Kubernetes Pod Auto-Scaling

  • Set up Kubernetes pods with:

    • A containerized application that processes images (e.g., using Python’s PIL library).

    • Horizontal Pod Autoscaler (HPA) to launch new pods based on demand.

  • Use Kubernetes Jobs to run processing tasks.

Example Deployment (YAML):
apiVersion: batch/v1
kind: Job
metadata:
  name: collage-job
spec:
  template:
    spec:
      containers:
      - name: collage-processor
        image: your-collage-processor-image
        env:
        - name: STORAGE_BUCKET
          value: "your-bucket-name"
      restartPolicy: Never
  backoffLimit: 4

4. Collage Service

  • Use a lightweight library like:

    • Python: PIL (Pillow), OpenCV.

    • Node.js: Sharp, Jimp.

  • The service should:

    • Receive image files.

    • Process the images into a collage.

    • Upload the result to cloud storage.


5. Cloud Storage Integration

  • Store images and the final collage in a cloud bucket.

  • Generate pre-signed URLs for:

    • Uploading images to the bucket (the pod then reads them from there).

    • Downloading the collage.


Challenges and Solutions

  1. Latency:

    • Cold starts of Kubernetes pods can add latency.

    • Use pre-warmed pods or a managed serverless container service (e.g., AWS Fargate, Google Cloud Run).

  2. Security:

    • Authenticate requests using API keys or OAuth.

    • Validate and sanitize user inputs to prevent abuse.

  3. Cost Management:

    • Set limits on pod resource usage and configure auto-scaling policies.

    • Implement a cleanup process for unused files.


This setup avoids the need for a traditional hosted server while leveraging Kubernetes for scalability and cloud storage for persistence. If you’d like, I can help you with detailed YAML configurations or code for specific parts!


Q: Thanks! considering that my blog hardly gets 100 unique visitors in a day, a couple would be interested in making a collage, how much would making one collage cost?

The cost of processing a single collage depends on several factors, including the cloud provider, the configuration of your Kubernetes cluster, and the resources allocated for each pod. Since your blog has low traffic and only a few users would use the collage feature, the costs can be kept minimal by using a pay-as-you-go model. Here’s a breakdown:


Cost Components

  1. Cloud Provider Costs
    You’ll primarily pay for:

    • Compute (CPU & Memory): For the time your pod is running.

    • Storage: For temporary storage of images and collages.

    • API Gateway: For handling requests.

    • Data Transfer: For uploading and downloading images.

  2. Kubernetes Costs

    • If you’re using a managed Kubernetes service (like GKE, EKS, or AKS), you’ll incur costs for:

      • The control plane (often free for low usage in some tiers).

      • The worker nodes, charged by the resources used.

  3. Processing Requirements
    For a simple collage generator:

    • Assume 0.25 vCPU and 256MB RAM for a single pod.

    • Average processing time: ~10-30 seconds per collage.

  4. Storage Costs

    • Temporary storage of uploaded images: Minimal (~1-10MB per collage).

    • Output storage for collages: ~1-2MB per collage.


Estimated Costs per Collage

Component | Details | Estimated Cost
Compute | 0.25 vCPU, 256MB RAM for 30 seconds (~0.008 hours) | ~$0.0000025 per collage (AWS/GCP)
API Gateway | One request to trigger the job | ~$0.0004 per request
Storage (temporary) | 10MB upload + 2MB result stored for <1 day | ~$0.0002 per collage
Data Transfer | 12MB total transfer (upload/download) | ~$0.00012 per collage

Total Estimated Cost per Collage: ~$0.00072 (round up to ~$0.001).


Monthly Cost for Your Traffic

Assume:

  • 5 collages/day (max traffic).

  • 30 days/month.

Monthly Cost:

5 × 30 × $0.001 = $0.15 USD/month
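As a sanity check, the arithmetic can be reproduced from the per-component estimates in the table above (the $0.15 figure uses the rounded-up $0.001 per collage; the raw sum is a bit lower):

```python
# Per-collage cost components (USD), from the estimates above
compute, api_gw, storage, transfer = 0.0000025, 0.0004, 0.0002, 0.00012

per_collage = compute + api_gw + storage + transfer
monthly = 5 * 30 * per_collage          # 5 collages/day for 30 days

print(f"per collage: ${per_collage:.5f}")   # ~$0.00072
print(f"monthly:     ${monthly:.2f}")       # ~$0.11 before rounding up
```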


Cost Optimization Tips

  1. Use Preemptible/Spot Instances:
    If the pod takes only a few seconds, spot instances can reduce costs significantly.

  2. Free Tiers:
    Many cloud providers offer free-tier usage:

    • Google Cloud: 50GB storage free/month, 1 vCPU/month for GKE.

    • AWS: 1 million API Gateway requests and 750 hours of t2.micro free for 12 months.

  3. Minimize Cold Starts:
    Use small and efficient Docker images to reduce startup times.

  4. Batch Processing:
    Queue requests and process multiple collages together to maximize resource usage.


Given your low traffic and minimal resource requirements, you can likely run this setup for less than $1/month in total. It's an extremely cost-effective solution! If you'd like, I can help analyze costs for a specific cloud provider you're considering.

Q: Is this kind of architecture called microservices?

Yes, the architecture you're considering has characteristics of microservices, but it also aligns closely with serverless and event-driven architecture principles. Let's break it down to clarify:


What is Microservices Architecture?

Microservices architecture involves designing an application as a collection of small, independent, and loosely coupled services. Each service focuses on a specific business capability and communicates with others via well-defined APIs.

Key Features of Microservices:

  1. Small, Single-Purpose Services: Each service handles a distinct functionality (e.g., image processing, user authentication).

  2. Independence: Services can be developed, deployed, and scaled independently.

  3. API Communication: Services communicate over APIs (usually HTTP or message queues).

  4. Containerization: Often deployed using Docker or Kubernetes.

Your architecture includes some of these elements:

  • The collage generation pod could be considered a "microservice" that performs one task (image processing).

  • Kubernetes enables independent scaling and containerization.

  • The system communicates via APIs (e.g., API Gateway).


How It Also Aligns with Serverless Architecture

Your approach more closely resembles serverless principles:

  1. Event-Driven: A Kubernetes pod (or job) is launched only in response to an event (user request).

  2. Ephemeral Services: Pods exist only for the duration of the task, which is a hallmark of serverless computing.

  3. Cost Efficiency: You only pay for the compute resources used during the task, avoiding idle costs.

This serverless-style deployment is supported by Kubernetes when configured with event-driven triggers like Kubernetes Jobs or integrations with cloud event systems (e.g., Cloud Run, AWS Lambda with EKS).


Comparison: Microservices vs. Serverless

Aspect | Microservices | Serverless (Your Case)
Granularity | Many small services, each running continuously | Short-lived tasks triggered by events
Deployment | Runs on Kubernetes, VMs, or similar | Uses serverless compute (e.g., pods/jobs)
Scaling | Scaled independently based on load | Automatically scaled, no idle resources
Resource Usage | Resources allocated even during idle times | Pay-per-use, zero cost when idle
Communication | Requires inter-service communication | Minimal; tasks are isolated

Your Architecture

Your design doesn't fully implement microservices since you aren't creating multiple interacting services. Instead, it leverages serverless principles on Kubernetes, with event-driven pods acting as disposable "workers" for a specific task.

Terminology:

  • It can be described as event-driven microservices or serverless workloads on Kubernetes.


Would Full Microservices Be Better?

For your use case (low traffic, simple processing task), microservices might be overkill. Serverless-style architecture is simpler and more cost-effective. However, if your system evolves to include multiple independent features (e.g., collage generation, video editing, user management), adopting a true microservices approach might make sense.

Would you like guidance on scaling this system or adapting it to include more features?

Can you give examples of computer techies getting confused by commands that seem to screw with their minds, as in a "mind fuck"?

 Certainly. In the programming and tech world, some commands or behaviors can be unintuitive, misleading, or just bizarre enough to make even experienced developers pause and wonder if reality is breaking. Here are some classic and modern examples of “mind-fuck” moments—commands or situations that confuse or trip up even seasoned computer techies:


🔧 1. The rm -rf / command

Context: Unix/Linux
Effect: Deletes everything on the root file system.

sudo rm -rf /

Why it’s confusing/terrifying:
Beginners may not understand the consequence, but seasoned users dread this. It is a complete wipeout of the root filesystem with no confirmation. Modern GNU coreutils refuses a bare rm -rf / unless --no-preserve-root is passed, but with that protection disabled there is often no recovery unless backups exist. The very existence of such a command in anyone's hands is a recipe for disaster.


🌀 2. JavaScript’s [] == ![]

Context: JavaScript

[] == ![]
// true

Why it melts minds:
It defies intuitive logic. ![] evaluates to false (an array is truthy, so its negation is false). Then [] == false triggers loose coercion: the array becomes the empty string "", which becomes the number 0; false also becomes 0, so 0 == 0 is true. This stems from JavaScript's loose coercion rules, and it is exactly why === exists.


🧠 3. Git Detachment Madness

Context: Git

git checkout <commit-hash>

Why it confuses people:
You land in a "detached HEAD" state with a cryptic message. Beginners often don’t understand why their changes aren’t showing up in any branch or why nothing seems to commit “right.”


🧨 4. Fork Bombs

Context: Unix/Linux

:(){ :|:& };:

Why it’s trippy:
This single-line shell function causes an infinite loop of process spawning—crippling the system. It looks like emoji soup to the untrained eye and behaves like malware in disguise. A real "WTF did I just run?" moment.


🧙 5. Python’s is vs ==

Context: Python

a = 256
b = 256
print(a is b)  # True

a = 257
b = 257
print(a is b)  # False

Why it's confusing:
CPython caches small integers (−5 to 256), so a is b is True for 256 but False for 257 when the lines are entered one at a time in the REPL (a script may intern both 257s in the same code object and print True). Beginners think is checks value equality, but it actually checks identity (the same object in memory); use == to compare values.


🕳️ 6. C: Buffer Overflow Mayhem

Context: C programming

char buffer[8];
strcpy(buffer, "AAAAAAAAAAAAAAAA");

Why it's a trap:
Beginners don’t realize this overflows into other memory, potentially corrupting execution flow. It’s subtle, works fine in some compilers, crashes in others, and introduces nondeterminism—a classic source of hair-pulling bugs.


🔁 7. Bash Brace Expansion Weirdness

mkdir {a,b,c}

Creates:

a/ b/ c/

But:

echo "I have "{no,some}" idea"

Outputs:

I have no idea I have some idea

(Put the braces inside quotes, or a space after the comma, and no expansion happens at all.)

Even weirder:

echo {1..3}
# 1 2 3

Why it trips people up:
It’s powerful but incredibly non-obvious. Works like magic until it doesn’t. And good luck spotting a missing comma or unmatched brace.


💾 8. Visual Studio Code: settings.json vs Workspace Settings

Context: VS Code
Why it messes with people:
You change a setting globally (settings.json) and wonder why it’s not reflected. Turns out, the workspace or folder-specific settings override it. It’s especially disorienting for users new to the layered configuration model.


🔄 9. Docker Volume and Bind Mount Confusion

docker run -v $(pwd):/app myimage

Why it’s confusing:
People expect the image’s /app to have its own files, but they get overwritten by the host’s files because of a bind mount. It leads to the question: “Where did my code go?!”


💥 10. Makefile Tabs vs Spaces

Context: Makefile

all:
    echo "hello"

Why it's evil:
Those must be tabs, not spaces. Use spaces and you get:
make: *** missing separator. Stop.

Mind-breaking moment: Tabs and spaces look identical in most editors. Debugging this becomes an existential crisis.


😵 11. Python: Mutable Default Arguments

def append_to_list(value, my_list=[]):
    my_list.append(value)
    return my_list
append_to_list(1)  # [1]
append_to_list(2)  # [1, 2] <-- WTF?!

Why:
The default value is evaluated once at function definition, not each time the function is called.
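The standard fix is a None sentinel, so a fresh list is created on every call:

```python
def append_to_list(value, my_list=None):
    if my_list is None:          # a new list is created per call,
        my_list = []             # not once at function definition
    my_list.append(value)
    return my_list

print(append_to_list(1))  # [1]
print(append_to_list(2))  # [2] -- no shared state this time
```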


🧪 Bonus: Quantum C++ Moment

int a = 10;
if (a = 5) {
    std::cout << "Surprise!" << std::endl;
}

Why it’s messed up:
Instead of checking if a is 5, it assigns 5 to a. Then if (5) is true. Looks totally legit to most people, and it's a frequent interview trap.


Want me to turn this into a presentation, comic strip, or programming poster?


A big namaskara to you, GPT sir... I don't understand this; now add some hyperbole and turn it into a comic strip, apparently!!!

Thursday, June 26, 2025

Execute https://wa.me/91xxxxxxxxxx URL in an Android App to Chat with non contact numbers

If your Google Chrome profile is linked with your WhatsApp account,

then while in Chrome open the URL https://wa.me/91xxxxxxxxxx,

the xxx being any mobile number, even one not in your contacts. You can rest assured that

you can chat with that person in WhatsApp without storing the number.

The same URL can be executed from an Android app.


After you launch the app,
you can type the number (no extra permissions needed),
press ENTER and off you go with your chat.

Here is the repo in case you are interested: Link

It's created in Android Studio using the Kotlin language.

Loom video: https://www.loom.com/share/762d9d55fa6b420eabbc78cec0ad9bda?sid=c3a1571b-c746-4993-a39d-45ab511501d4

Monday, June 2, 2025

Full Stack Development in MERN Stack

 Hello,

A tumultuous ride it has been, as always. It's June 2nd, 2025, and as the even semester of academic year 2024-25 reaches its end, my college went through peer review by the NBA; results are awaited. Suffice it to say, we have learnt what it entails to produce graduates with all the attributes we claim they possess.

I (https://jyothyit.ac.in/faculty/mr-mukesh-kamath-bola/) was called upon to handle the subject FSD with the MERN stack. It was very tough, especially since I didn't know MongoDB that well, let alone Express.js, webpack, or modularization.

Here is our lab manual, which was an outcome of the course.

FSD-BIS-601-MANUAL

Here are some projects that I demonstrated: https://github.com/bmkamath2000/React-Projs

Sunday, April 20, 2025

Dunder / Magic Methods in Python like __init__()

Tech with Tim, in his Python video on dunder methods, covers the following:

1) The __init__() method is the initializer (commonly called the constructor)



class Rect():

    def __init__(self,x,y):

        self.x = x

        self.y = y

        print("Called __init__ with param %s, %s" % (x,y))

Rect(2,3) => calls __init__ method.

2)  + method is same as __add__()

str1 = "hello"                          

str2 = "world"

new_str = str1 + str2

same_str = str1.__add__(str2)

print(new_str)

print(same_str)


3) __len__() is invoked when len(object) is called


str1 = "hello"

str2 = "world"

str1_length = len(str1)

str2_length = len(str2)

print(str1_length, str2_length)


4)  __add__(self, oper): is invoked on + operator

     __str__(self): is invoked when the object is printed

class Counter():

    def __init__(self):

        self.value = 1

    def count_up(self):

        self.value += 1

    def count_down(self):

        self.value -= 1

    def __str__(self):

        return "Count=%s" % self.value

    def __add__(self, oper):

        return self.value + oper.value

count1 = Counter()

count2 = Counter()

count1.count_up()

count2.count_up()

print(count1, count2)

print(count1 + count2)

5) __repr__()   - representation

6) __sub__(),    --subtraction

   __mul__(),      -- multiply

  __truediv__()    -- division

7) __lt__()      -- less than,

   __gt__()     -- greater than,

   __eq__()     -- equal to,

   __ge__()     -- greater than or equal,

   __le__()     -- less than or equal,

   __ne__()     -- not equal

8) __getitem__() in lists obj[item] useful for indexing,

   __setitem__() in lists obj[index]= value,

   __delitem__()in lists del obj[index],

I intended to try out all of the methods, but lack of time is making me post this sooner.
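A quick try-out of item 8 (an illustrative wrapper class, not from the video):

```python
class Shelf:
    """Minimal container showing __getitem__ / __setitem__ / __delitem__."""
    def __init__(self, items):
        self.items = list(items)
    def __getitem__(self, index):         # obj[index]
        return self.items[index]
    def __setitem__(self, index, value):  # obj[index] = value
        self.items[index] = value
    def __delitem__(self, index):         # del obj[index]
        del self.items[index]
    def __len__(self):                    # len(obj)
        return len(self.items)

s = Shelf(["pen", "book", "lamp"])
s[1] = "notebook"
del s[0]
print(s[0], len(s))  # notebook 2
```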

Tuesday, February 25, 2025

Python Decorators

These Python features are not listed in any particular order of importance.

I chanced upon a YouTube video where "the python dude" was explaining decorators.

It turns out that in Python we can pass functions themselves as parameters.

The passed function can then be called from inside the receiving function.

Hence our code becomes more useful, if also more difficult to understand.

import time

def measure_time(func):
    def wrapper():
        start = time.time()
        func()
        end = time.time()
        print(f'Execution time : {end - start} sec')
    return wrapper

@measure_time
def morning():
    print("Good morning")

morning()

Output:

pythonprgms>python pythonprgm5.py

Good morning

Execution time : 0.000936 sec

A similar program with two decorators on the function morning is possible:

# Without returning a wrapper, the decorator would run the function immediately.
# The wrapper ensures the function only runs when you actually call it.

import time

def measure_time(func):
    def wrapper():
        start = time.time()
        func()
        end = time.time()
        print(f'Execution time : {end - start} sec')
    return wrapper

def greeting(func):
    def wrapper():
        print(f'Hello,')
        func()      
    return wrapper


@greeting
@measure_time
def morning():
    print("Good morning")

morning()

Output:

pythonprgms>python pythonprgm6.py

Hello,
Good morning
Execution time : 0.000936 sec

With similar changes we can pass arguments to the morning function, but the decorator wrappers must forward the same arguments (via *args and **kwargs).

# *args , **kwargs
import time

def measure_time(func1):
    def wrapper(*args , **kwargs):
        start = time.time()
        func1(*args , **kwargs)
        end = time.time()
        print(f'Execution time : {end - start} sec')
    return wrapper

def greeting(func2):
    def wrapper(*args , **kwargs):
        print(f'Hello,')
        func2(*args , **kwargs)      
    return wrapper


@greeting
@measure_time
def morning(name):
    print("Good morning %s" % name)

morning("John P Diaz")

Output:

pythonprgms>python pythonprgm7.py

Hello,
Good morning John P Diaz
Execution time : 0.0 sec
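One refinement the snippets above skip: the wrappers replace the decorated function's metadata (morning.__name__ becomes "wrapper"), which confuses debuggers and documentation tools. The stdlib's functools.wraps fixes that; a sketch using a single-decorator version of the same program:

```python
import functools
import time

def measure_time(func):
    @functools.wraps(func)              # preserves func's name and docstring
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        print(f'Execution time : {time.time() - start} sec')
        return result
    return wrapper

@measure_time
def morning(name):
    """Greet someone in the morning."""
    print("Good morning %s" % name)

morning("John P Diaz")
print(morning.__name__)  # morning (would be 'wrapper' without functools.wraps)
```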

Friday, January 17, 2025

Python Dictionaries

 A dictionary in Python contains name-value pairs, like:

thisdict = {
    "name" : "Shiva",
    "gender" : True,
    "age" : 45,
}

where thisdict contains name, gender and age of string, boolean and int data types respectively.

On purpose I have created another dict with a nested JSON-like dict as the value of a name-value pair.

thisdict = {
    "name" : "Shiva",
    "gender" : True,
    "age" : 45,
    "assets-json" : {
        "net-worth": 1000000,
        "house-type": "flat",
        "address": ["J P Nagar 5th Ph",
                    "Bangalore",
                    "560034"]
    }
}

I have made use of this dictionary structure in a program which

a. finds the length of the dictionary

b. prints the name, gender and age

c. prints the net-worth of the individual in the dictionary object.


thisdict = {
    "name" : "Shiva",
    "gender" : True,
    "age" : 45,
    "assets-json" : {
        "net-worth": 1000000,
        "house-type": "flat",
        "address": ["J P Nagar 5th Ph",
                    "Bangalore",
                    "560034"]
    }
}

thatdict = {
    "name" : "Harish",
    "gender" : True,
    "age" : 54,
    "assets-json" : {
        "net-worth": 10000000,
        "house-type": "Individual House",
        "address": ["Jayanagar",
                    "Bangalore",
                    "560045"]
    }
}

def length_of_dict(dict_param):
    return len(dict_param)

def display_dict(dict_param):
    print("this dict is of %s" % dict_param["name"])
    gender = "male" if dict_param["gender"] == True else "female"
    print("Age = %s \nGender= %s" %(dict_param["age"], gender))
    print("Net Worth:%s" % dict_param["assets-json"]["net-worth"])

if __name__=="__main__":
    print("Length of thatdict:",length_of_dict(thatdict))
    print("Length of thisdict:",length_of_dict(thisdict))
    display_dict(thatdict)
    display_dict(thisdict)

Output of the program:

Length of thatdict: 4

Length of thisdict: 4

this dict is of Harish

Age = 54

Gender= male

Net Worth:10000000

this dict is of Shiva

Age = 45

Gender= male

Net Worth:1000000
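If a record might be missing the "assets-json" entry, a defensive variant of the net-worth lookup uses dict.get with defaults instead of indexing (this record and helper are illustrative, not part of the program above):

```python
record = {"name": "Shiva", "gender": True, "age": 45}  # no "assets-json" here

def net_worth(dict_param):
    # .get returns a default instead of raising KeyError on a missing key
    return dict_param.get("assets-json", {}).get("net-worth", 0)

print("Net Worth:%s" % net_worth(record))  # Net Worth:0
```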

Wednesday, January 1, 2025

Python Lists

 In Python, a list is a built-in dynamic-sized array (it automatically grows and shrinks). We can store items of any type (including another list) in a list. A list may contain items of mixed types; this is possible because a list mainly stores references at contiguous locations, while the actual items may be stored elsewhere.

  • A list can contain duplicate items.
  • Lists in Python are mutable. Hence, we can modify, replace or delete the items.
  • Lists are ordered. They maintain the order of elements based on how they are added.
  • Items in a list can be accessed directly using their position (index), starting from 0.

Nested lists in Python:

matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
]

# Access element at row 2, column 3
print(matrix[1][2])

Another program finds pairs of numbers in a sorted array that sum to a given value (target):

def pair_sum_sorted_all_pairs(arr, target):

  """

  Finds all pairs in a sorted array of distinct integers that sum to a given target.

  Args:

    arr: A sorted array of integers.

    target: The target sum.

  Returns:

    A list of tuples representing the pairs.

  """

  result = []

  left, right = 0, len(arr) - 1

  while left < right:

    current_sum = arr[left] + arr[right]

    if current_sum == target:

      result.append((arr[left], arr[right]))

      left += 1

      right -= 1

    elif current_sum < target:

      left += 1

    else:

      right -= 1


  return result


if __name__=="__main__":

    arr = [1, 2, 3, 4, 5, 6, 7]

    target = 8

    print(pair_sum_sorted_all_pairs(arr, target))

When we run this program here we get following output:


C:\Users\kamat>python untitled123.py

[(1, 7), (2, 6), (3, 5)]