The Locknet: How China Controls Its Internet and Why It Matters
The Past, Present, and Future of Police Body Cameras

Artificial intelligence (AI) is reshaping the criminal justice system. Law enforcement agencies are using it to predict crime, expedite response, and streamline routine tasks. One of the most promising applications can be found in body camera programs, where AI is transforming unmanageable archives of footage into active sources of insight.

AI can now analyze hundreds of hours of video in seconds. Early pilot programs suggest that these video-reviewing tools, when guided by human oversight, can uncover critical evidence that might otherwise be overlooked, reduce pretrial bottlenecks, and identify potential instances of officer misconduct. But these benefits come with risks. Absent clear guardrails, the same technologies could drift toward government overreach, blurring the line between public safety and state surveillance.

The line between public safety and state surveillance lies not in the technology itself but in the policies that govern it. To responsibly harness AI and mitigate these risks, we recommend that agencies and policymakers:

  • Establish and enforce clear use policies. Statewide rules for body camera use and AI governance ensure consistency across jurisdictions, particularly in areas like body camera activation, evidence sharing, and public disclosure.
  • Pair technology with human oversight. AI should enhance—not replace—human decision-making. Final judgments must rest with trained personnel, supported by independent policy oversight from civilian review boards.
  • Safeguard civil liberties. Safeguards must be in place to protect individual rights, limit surveillance overreach, and ensure data transparency. For example, limiting facial recognition during constitutionally protected activities like protests will help ensure AI is aligned with democratic ideals.

With the right guardrails in place, AI can elevate body cameras from after-action archival tools to always-on intelligence tools, informing decisions in the moment, when it matters most.

How Performant Are LLM Agents (AI Chatbots) on Real-World Work Tasks? They Fail 70% or More of the Time.

Boeing’s Inadequate ‘Training, Guidance and Oversight’ Led to Mid-Exit Door Plug Blowout on Passenger Jet

FAA cited for ineffective oversight of Boeing’s known recordkeeping issues

WASHINGTON (June 24, 2025) — The National Transportation Safety Board Tuesday said the probable cause of last year’s in-flight mid-exit door (MED) plug blowout on a Boeing 737 MAX 9 was Boeing’s failure to “provide adequate training, guidance and oversight” to its factory workers.

The NTSB also found the Federal Aviation Administration was ineffective in ensuring Boeing addressed “repetitive and systemic” nonconformance issues associated with its parts removal process.

The NTSB also concluded that in the two years before the accident, Boeing’s voluntary safety management system, or SMS, was inadequate, lacked formal FAA oversight, and did not proactively identify and mitigate risks. The investigation found that accurate and ongoing data about overall safety culture is necessary for an SMS to be successfully integrated into a quality management system.

On Jan. 5, 2024, the Boeing 737-9, operated as Alaska Airlines flight 1282, was climbing through 14,830 feet about six minutes after takeoff from Portland, Oregon, when the left MED plug departed the airplane. During the rapid depressurization, some passengers’ belongings were sucked out of the airplane, oxygen masks dropped from the overhead passenger service units, and the door to the flight deck swung open, injuring a flight attendant. In addition to the flight attendant, seven passengers received minor injuries. The two pilots, the other three flight attendants and the remaining 164 passengers were uninjured. The flight was destined for Ontario, California.

“The safety deficiencies that led to this accident should have been evident to Boeing and to the FAA — should have been preventable,” NTSB Chairwoman Jennifer Homendy said. “This time, it was missing bolts securing the MED plug. But the same safety deficiencies that led to this accident could just as easily have led to other manufacturing quality escapes and, perhaps, other accidents.”

The MED plug was found in a Portland neighborhood two days after the accident. When investigators examined the recovered plug, they found evidence that the four bolts needed to secure the plug were missing before the accident occurred. Without the bolts, NTSB investigators found the unsecured plug “had moved incrementally upward during previous flight cycles” until it departed the airplane during the accident flight.

The airplane had been delivered to Alaska Airlines three months earlier. Investigators determined that the door plug was opened without the required documentation in Boeing’s Renton, Washington, factory on Sept. 18, 2023, to perform rivet repair work on the fuselage. The door plug was closed the following day. While Boeing’s procedures called for specific technicians to open or close MED plugs, none of the specialized workers were working at the time the door plug was closed. The absence of proper documentation of the door plug work meant no quality assurance inspection of the plug closure occurred.

The investigation also highlighted the need for additional training on flight crew oxygen masks and their communication systems and the need for greater voluntary use of child restraint systems by caregivers of those under two years of age.

The NTSB issued new safety recommendations to the FAA and Boeing. Previously issued recommendations were reiterated to the FAA, Airlines for America, the National Air Carrier Association and Regional Airline Association.

The executive summary of the report, including the findings, probable cause and safety recommendations, is available online. Additional material, including the preliminary report, previously issued safety recommendations, news releases, the public docket, investigative updates and links to photos and videos, is available on the accident investigation webpage.

The final report will be published in the coming weeks on NTSB.gov.

The OpenAI Files Document Broken Promises, Safety Compromises, Conflicts of Interest, and Leadership Concerns
The OpenAI Files (www.openaifiles.org)

The OpenAI Files is the most comprehensive collection to date of documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI.


Major Areas of Concern:

::: spoiler Restructuring: Analysis of planned changes to the nonprofit's relationship with its for-profit subsidiary

  • OpenAI plans to remove limits on investor returns: OpenAI once capped investor profits at a maximum of 100x to ensure that, if the company succeeds in building AI capable of automating all human labor, the proceeds would go to humanity. They have now announced plans to remove that cap.
  • OpenAI portrays itself as preserving nonprofit control while potentially disempowering the nonprofit: OpenAI claims to have reversed course on a decision to abandon nonprofit control, but the details suggest that the nonprofit’s board would no longer have all the authority it would need to hold OpenAI accountable to its mission.
  • Investors pressured OpenAI to make structural changes: OpenAI has admitted that it is making these changes to appease investors who have made their funding conditional on structural reforms, including allowing unlimited returns—exactly the type of investor influence OpenAI’s original structure was designed to prevent.
:::

::: spoiler CEO Integrity: Concerns regarding leadership practices and misleading representations from OpenAI CEO Sam Altman

  • Senior employees have attempted to remove Altman at each of the three major companies he has run: Senior employees at Altman’s first startup twice urged the board to remove him as CEO over “deceptive and chaotic” behavior, while at Y Combinator, he was forced out and accused of absenteeism and prioritizing personal enrichment.
  • Altman claimed ignorance of a scheme to coerce employees into ultra-restrictive NDAs: However, he signed documents giving OpenAI the authority to revoke employees’ vested equity if they didn’t sign the NDAs.
  • Altman repeatedly lied to board members: For example, Altman stated that the legal team had approved a safety process exemption when they had not, and he reported that one board member wanted another board member removed when that was not the case.
:::

::: spoiler Transparency & Safety: Concerns regarding safety processes, transparency, and organizational culture at OpenAI

  • OpenAI coerced employees into signing highly restrictive NDAs threatening their vested equity: Former OpenAI employees faced highly restrictive non-disclosure and non-disparagement agreements that threatened the loss of all vested equity if they ever criticized the company, even after resigning.
  • OpenAI has rushed safety evaluation processes: OpenAI rushed safety evaluations of its AI models to meet product deadlines and significantly cut the time and resources dedicated to safety testing.
  • OpenAI insiders described a culture of recklessness and secrecy: OpenAI employees have accused the company of not living up to its commitments and systematically discouraging employees from raising concerns.
:::

::: spoiler Conflicts of Interest: Documenting potential conflicts of interest of OpenAI board members

  • OpenAI’s nonprofit board has multiple seemingly unaddressed conflicts of interest: While OpenAI defines ‘independent’ directors as those without OpenAI equity, the board appears to overlook conflicts from members' external investments in companies that benefit from OpenAI partnerships.
  • CEO Sam Altman downplayed his financial interest in OpenAI: Despite once claiming to have no personal financial interest in OpenAI, much of Altman’s $1.6 billion net worth is spread across investments in OpenAI partners including Retro Biosciences and Rewind AI, which stand to benefit from the company’s continued growth.
  • No recusals announced for critical restructuring decision: Despite these conflicts, OpenAI has not announced any board recusals for the critical decision of whether they will restructure and remove profit caps, unlocking billions of dollars in new investment.
:::