Monthly Archives: November 2024
Declaration against Judge Gatewood
Plaintiff, though blind and deaf, believes Guam is the best place to live but the worst place for ADA compliance. Many buildings and sidewalks allow zero wheelchair access, and many policies create extreme hardships for elderly and disabled individuals, even though changing those policies would save the government a great deal of money. Most people in positions of power on Guam are related to one another.
Plaintiff believes agency directors related to Judge Gatewood complained about plaintiff, leading to Judge Gatewood's unreasonable and even illegal actions of dismissing and closing all of the cases plaintiff has been using to address the most serious ADA noncompliance. There were verbal settlement agreements in those cases, and the cases needed to remain open in case of BREACH of those agreements. Judge Gatewood was aware of this, yet still dismissed and closed the cases without allowing the settlement agreement information to be entered, and also labeled plaintiff a vexatious LITIGANT. Judge Gatewood does not allow plaintiff to file electronically via fax or email, even though all other district courts do. Even though plaintiff is blind and deaf, plaintiff would like Judge Gatewood to respond to these allegations.
For years, plaintiff walked miles and miles just to speak to agency directors who went to extreme measures to avoid plaintiff; every trip to the courthouse is HOURS of hiking for plaintiff because Judge Gatewood will not let plaintiff file electronically. Plaintiff has endured physical assaults and vehicular assaults from government officials, and Judge Gatewood does not even want to see the videos of those assaults.
Mahalo
SIGNATURE:
Clifford "RAY" Hackett www.rayis.me RESUME: www.rayis.me/resume
I founded www.adapt.org in 1980; it now has over 50 million members.
$500 of material=World’s fastest hydrofoil sailboat. http://sunrun.biz
Artificial intelligence for Guam government
Introducing artificial intelligence (AI) into the government of Guam can help improve efficiency, streamline services, and enhance decision-making. Here’s a phased approach that allows for gradual implementation, ensuring buy-in, training, and regulation at each stage.
Phase 1: Initial Exploration and Foundation Building
Objective: Introduce AI at a foundational level by building awareness, identifying needs, and setting up governance frameworks.
1. Awareness and Education Campaign
• Conduct workshops and seminars for government officials to introduce the basics of AI, its benefits, and its applications.
• Partner with AI experts, universities, and private sector innovators to provide insight into best practices.
2. Needs Assessment and Feasibility Study
• Conduct a comprehensive needs assessment across departments (e.g., public health, public safety, infrastructure) to identify areas where AI can bring the most value.
• Assess feasibility and potential ROI for initial AI implementations, considering budget and technical resources.
3. Establish AI Task Force and Governance Policies
• Form an AI task force comprising government officials, AI experts, legal advisors, and citizen representatives.
• Develop a preliminary AI governance framework that includes ethical guidelines, data privacy policies, and risk management.
Phase 2: Pilot Programs and Infrastructure Setup
Objective: Begin with small, targeted pilot programs to build expertise and evaluate AI's effectiveness in a controlled setting.
1. Implement AI Pilot Projects
• Public Services Chatbot: Create an AI-powered chatbot on the government website to answer frequently asked questions, freeing up time for government staff (a minimal sketch follows this list).
• Predictive Maintenance in Infrastructure: Use AI for predictive maintenance on public infrastructure, such as roads, power lines, and water systems, to reduce costs and prevent issues.
• Smart Traffic Management: Test AI to optimize traffic signals and improve traffic flow in high-congestion areas.
2. Data Infrastructure and Security Enhancements
• Invest in cloud and edge computing infrastructure to handle data needs and support AI applications.
• Strengthen cybersecurity measures to protect sensitive government data used in AI applications.
3. Public Transparency and Feedback Mechanisms
• Establish public dashboards showing AI performance metrics (e.g., response rates for chatbots, maintenance success rates).
• Collect citizen feedback to improve AI systems and build public trust.
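To make the chatbot pilot concrete, here is a minimal sketch of the kind of tool Phase 2 could start with: a script that matches a resident's question against a short, hand-written FAQ list by word overlap. Everything in it is hypothetical placeholder content (the questions, the answers, and the two-word match threshold), not an existing GovGuam system.

# Minimal FAQ matcher: a placeholder stand-in for the Phase 2 public-services chatbot.
# The questions and answers are hypothetical examples, not official GovGuam content.
FAQ = {
    "how do i renew my driver's license": "Placeholder answer: renewal steps and office hours would go here.",
    "how do i apply for a business license": "Placeholder answer: application requirements and fees would go here.",
    "how do i report a pothole": "Placeholder answer: the public-works service request link would go here.",
}

def answer(question):
    """Return the stored answer whose question shares the most words with the input."""
    words = set(question.lower().split())
    best = max(FAQ, key=lambda q: len(words & set(q.split())))
    if len(words & set(best.split())) < 2:  # weak match: hand the query off to staff
        return "Sorry, I don't have that answer yet; a staff member will follow up."
    return FAQ[best]

print(answer("How can I renew my license?"))

A real pilot would swap the word-overlap scoring for a proper retrieval or language-model service and log unanswered questions for staff review, feeding the performance dashboards described under Public Transparency and Feedback Mechanisms.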
Phase 3: Scale Successful Programs and Expand Applications
Objective: Scale up successful AI pilots and introduce AI into more critical decision-making areas.
1. Expand Pilot Programs
• Broaden the use of successful pilots like AI chatbots to cover more complex queries and assist in scheduling public services (e.g., health appointments).
• Scale predictive maintenance to more areas, including schools, hospitals, and utilities.
2. Introduce AI in Public Safety and Health
• Emergency Response: Use AI to analyze real-time data in emergency situations (e.g., typhoon response) to coordinate resources more effectively.
• Healthcare Data Analytics: Partner with healthcare providers to use AI for population health management, predicting outbreaks, and improving patient care.
3. Create AI Training and Certification for Government Employees
• Provide ongoing training programs for government employees, focusing on using AI tools and interpreting AI-driven insights.
• Develop AI certification programs to create an internal talent pool capable of maintaining and improving AI systems.
4. Formalize AI Ethics and Accountability Standards
• Refine the governance framework to include an ethics board that reviews and oversees the use of AI across government functions.
• Implement accountability standards and ensure AI systems comply with federal regulations and local laws.
Phase 4: Integration into Core Government Operations
Objective: Make AI an integral part of the government's strategic operations and decision-making processes.
1. Data-Driven Decision Support Systems
• Use AI-driven data analytics to support strategic decisions in areas like budgeting, urban planning, and environmental conservation.
• Implement AI in predictive policy modeling to understand potential outcomes of proposed policies, such as tax changes or environmental regulations.
2. Enhance Public Services through AI Personalization
• Use AI to personalize citizen interactions, such as tailoring services and resources to individual needs (e.g., targeted job placement support, customized health resources).
• Expand AI-driven solutions to improve resident experiences, such as smart city initiatives (e.g., optimized public transport).
3. Continuous Monitoring and Improvement
• Set up a monitoring body to continuously review and improve AI applications, ensuring they remain effective and relevant.
• Regularly audit AI systems for fairness, accuracy, and accountability, with periodic public reports to maintain transparency.
Phase 5: AI-Driven Innovation and Self-Sustaining Ecosystem
Objective: Establish Guam as a model for AI in government, fostering innovation and local expertise.
1. Create an AI Innovation Lab for Government Projects
• Establish an AI innovation lab to continuously explore new AI applications, train employees, and pilot experimental projects.
• Partner with educational institutions to encourage AI research that aligns with Guam's government needs.
2. Foster Public-Private AI Collaborations
• Partner with local businesses and tech companies to co-develop AI solutions and create local economic growth.
• Encourage start-ups by providing funding or mentorship for AI innovations that align with public service needs.
3. Establish Guam as a Regional Leader in AI for Public Governance
• Showcase Guam's AI initiatives at national and international levels as a case study for effective, ethical AI in government.
• Host workshops and conferences to foster knowledge-sharing and solidify Guam's reputation as an innovator in AI governance.
Conclusion
This phased approach enables Guam’s government to responsibly integrate AI, ensuring that each step brings clear benefits while managing risks and maintaining public trust.
Mahalo
SIGNATURE:
Clifford "RAY" Hackett www.rayis.me RESUME: www.rayis.me/resume
I founded www.adapt.org in 1980; it now has over 50 million members.
$500 of material=World’s fastest hydrofoil sailboat. http://sunrun.biz
Eviction injunction
Clifford Ray Hackett
CRH2123@iCloud.com
671-787-2345
Guam Housing and Urban Renewal Authority (GHURA)
Subject: Request for Injunction to Halt Eviction
Dear Sir/Madam,
I am writing to request an injunction to halt eviction proceedings initiated by the Guam Housing and Urban Renewal Authority (GHURA).
Background:
I have been a tenant at the above address for years and recently received an eviction notice. I believe this eviction to be unlawful and unjust.
Grounds for Injunction:
1. Violation of Lease Agreement:
The eviction notice violates the terms of my lease agreement.
2. Lack of Proper Notice:
Guam law requires a minimum number of days' notice before eviction. I received NONE, which is insufficient.
3. Retaliatory Eviction:
The eviction may be retaliatory, following my recent complaints. Retaliatory evictions are prohibited.
4. Potential Discrimination:
I am concerned that this eviction may be discriminatory based on [race, gender, disability, etc.], which would violate the Fair Housing Act and other applicable laws.
5. Hardship:
The eviction would impose severe hardship on me.
Request:
I request the immediate cessation of eviction proceedings and an injunction to prevent further actions until a full investigation and hearing are conducted.
Please confirm receipt of this letter and provide a prompt response. I am prepared to provide additional documentation to support this request.
Sincerely,
Clifford Ray Hackett
USCIS Tip Form | USCIS
Case 6735
I am a party in that case. How may I get access to see the court docket?
Sent from my iPhone
Can’t Open Page
Improved CRAWLER 241112
import requests
import time
import random
from bs4 import BeautifulSoup

# Global variables
URL_list = []
URL_parent_Category = {}
categoryLevel = {}
history = {}
final_URLs = {}
parsed = 0
n_URLs = 1
max_URLs = 5000

# Base URLs
URL_base1 = "https://mathworld.wolfram.com/topics/"  # Directory pages
URL_base2 = "https://mathworld.wolfram.com/"  # Final pages

# Seed URL and Category
seed_URL = "https://mathworld.wolfram.com/topics/ProbabilityandStatistics.html"
seed_category = "Probability and Statistics"
categoryLevel[seed_category] = 1

# Validate function to filter unwanted links
def validate(string):
    ignore_list = ['about/', 'classroom/', 'contact/', 'whatsnew/', 'letters/']
    return len(string) <= 60 and all(i not in string for i in ignore_list) and 'topics' not in string

# Request with retries and random user-agent
def get_request(url, retries=3, timeout=5):
    headers = {'User-Agent': random.choice([
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/58.0 Safari/537.36',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 Safari/605.1.15'
    ])}
    for attempt in range(retries):
        try:
            response = requests.get(url, timeout=timeout, headers=headers)
            return response
        except requests.RequestException:
            time.sleep(2 + random.uniform(0, 1.5))  # Randomized sleep
    return None

# Update URL and category lists
def update_lists(new_URL, new_category, parent_category, file):
    URL_parent_Category[new_URL] = new_category
    categoryLevel[new_category] = categoryLevel[parent_category] + 1
    level = str(categoryLevel[new_category])
    file.write(f"{level}\t{new_category}\t{parent_category}\n")
    file.flush()

# Crawling phase
def crawl(seed_URL, seed_category, file1, file2):
    global parsed, n_URLs
    URL_list.append(seed_URL)
    URL_parent_Category[seed_URL] = seed_category
    while parsed < min(max_URLs, n_URLs):
        URL = URL_list[parsed]
        parent_category = URL_parent_Category[URL]
        level = categoryLevel[parent_category]
        time.sleep(2 + random.uniform(0, 1.5))
        parsed += 1
        if URL in history:
            file1.write(f"{URL}\tDuplicate\t{parent_category}\t{level}\n")
            continue
        resp = get_request(URL)
        history[URL] = resp.status_code if resp else "Error"
        if not resp or resp.status_code != 200:
            file1.write(f"{URL}\tError:{resp.status_code if resp else 'Timeout'}\t{parent_category}\t{level}\n")
            continue
        file1.write(f"{URL}\tParsed\t{parent_category}\t{level}\n")
        soup = BeautifulSoup(resp.text, 'html.parser')
        for link in soup.find_all('a', href=True):
            href = link['href']
            new_category = link.text.strip()
            if 'topics/' in href:
                new_URL = URL_base1 + href.split("/topics/")[1]
                URL_list.append(new_URL)
                update_lists(new_URL, new_category, parent_category, file2)
                file1.write(f"{new_URL}\tQueued\t{new_category}\t{level+1}\n")
                n_URLs += 1
            elif validate(href):
                new_URL = URL_base2 + href.split("/")[1]
                final_URLs[new_URL] = (new_category, parent_category, level+1)
                update_lists(new_URL, new_category, parent_category, file2)
                file1.write(f"{new_URL}\tEndNode\t{new_category}\t{level+1}\n")
    print(f"Crawling completed. Parsed {parsed} URLs out of {n_URLs}.")

# Content extraction phase
def extract_content(begin, end):
    with open("list_final_URLs.txt", "r", encoding="utf-8") as file_input, \
         open(f"crawl_final_{begin}_{end}.txt", "w", encoding="utf-8") as file_output:
        for line in file_input:
            count, URL, category = line.split("\t")[:3]
            if begin <= int(count) <= end:
                resp = get_request(URL)
                if resp and resp.status_code == 200:
                    page = resp.text.replace('\n', ' ')
                    file_output.write(f"{URL}\t{category}\t~{page}\n")
                else:
                    print(f"Error fetching {URL}: {resp.status_code if resp else 'Timeout'}")
    print(f"Content extraction from {begin} to {end} completed.")

# Main execution
if __name__ == "__main__":
    with open("crawl_log.txt", "w", encoding="utf-8") as file1, \
         open("crawl_categories.txt", "w", encoding="utf-8") as file2:
        crawl(seed_URL, seed_category, file1, file2)
    extract_content(begin=1, end=500)
    print("All tasks completed successfully.")
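One gap worth flagging in the script above: extract_content() reads list_final_URLs.txt, but the crawl phase only collects end-node pages in the final_URLs dictionary and never writes that file. A minimal bridging helper along the following lines (the function name and extra columns are my own; the first three tab-separated fields match what extract_content() expects) could be called between crawl() and extract_content() in the main block:

# Hypothetical bridge between the two phases: dump final_URLs into the
# tab-separated file that extract_content() reads (count, URL, category, ...).
def write_final_url_list(path="list_final_URLs.txt"):
    with open(path, "w", encoding="utf-8") as out:
        for count, (url, (category, parent, level)) in enumerate(final_URLs.items(), start=1):
            out.write(f"{count}\t{url}\t{category}\t{parent}\t{level}\n")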
Mahalo
SIGNATURE:
Clifford "RAY" Hackett www.rayis.me RESUME: www.rayis.me/resume
I founded www.adapt.org in 1980; it now has over 50 million members.
$500 of material=World’s fastest hydrofoil sailboat. http://sunrun.biz
Combined crawler and extraction
import requests
import time
import random
from bs4 import BeautifulSoup
# Global variables
URL_list = []
URL_parent_Category = {}
categoryLevel = {}
history = {}
final_URLs = {}
parsed = 0
n_URLs = 1
max_URLs = 5000
# Base URLs
URL_base1 = "https://mathworld.wolfram.com/topics/" # for directory pages (root)
URL_base2 = "https://mathworld.wolfram.com/" # for final pages
# Seed URL and Category
seed_URL = "https://mathworld.wolfram.com/topics/ProbabilityandStatistics.html"
seed_category = "Probability and Statistics"
categoryLevel[seed_category] = 1 # Start category level
# Proxy setup (optional, uncomment and modify if needed)
# proxies = {
#     'http': 'http://username:password@proxy_url:proxy_port',
#     'https': 'https://username:password@proxy_url:proxy_port',
# }
# Validate function to filter unwanted links
def validate(string):
    Ignore = ['about/', 'classroom/', 'contact/', 'whatsnew/', 'letters/']
    return len(string) <= 60 and string not in Ignore and 'topics' not in string
# Request with retries and custom headers
def get_request_with_retries(url, retries=3, timeout=5):
    headers = {
        'User-Agent': random.choice([
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
            'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0',
            'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15'
        ])
    }
    for i in range(retries):
        try:
            # Uncomment the `proxies` argument if using a proxy
            # resp = requests.get(url, timeout=timeout, proxies=proxies, headers=headers)
            resp = requests.get(url, timeout=timeout, headers=headers)
            return resp
        except requests.exceptions.RequestException as e:
            print(f"Attempt {i+1} failed for URL: {url}. Error: {e}")
            time.sleep(2 + random.uniform(0, 1.5))  # Randomized sleep
    return None
# Update lists of URLs and categories
def update_lists(new_URL, new_category, parent_category, file):
    URL_parent_Category[new_URL] = new_category
    categoryLevel[new_category] = 1 + categoryLevel[parent_category]
    level = str(categoryLevel[new_category])
    file.write(f"{level}\t{new_category}\t{parent_category}\n")
    file.flush()
# Crawling phase (Step 1)
def crawl(seed_URL, seed_category, file1, file2):
    global parsed, n_URLs
    URL_list.append(seed_URL)
    URL_parent_Category[seed_URL] = seed_category
    categoryLevel[seed_category] = 1
    while parsed < min(max_URLs, n_URLs):
        URL = URL_list[parsed]
        parent_category = URL_parent_Category[URL]
        level = categoryLevel[parent_category]
        time.sleep(2 + random.uniform(0, 1.5))  # Slow down crawling
        parsed += 1
        if URL in history:
            print(f"Duplicate: {URL}")
            file1.write(f"{URL}\tDuplicate\t{parent_category}\t{level}\n")
        else:
            print(f"Parsing: {parsed}/{n_URLs}: {URL}")
            resp = get_request_with_retries(URL)
            if resp:
                history[URL] = resp.status_code
            else:
                history[URL] = "Error"
            if not resp or resp.status_code != 200:
                reason = resp.reason if resp else "Timeout"
                print(f"Failed: {URL} - {reason}")
                file1.write(f"{URL}\tError:{resp.status_code if resp else 'Timeout'}\t{reason}\t{parent_category}\t{level}\n")
            else:
                file1.write(f"{URL}\tParsed\t{parent_category}\t{level}\n")
                page = resp.text.replace('\n', ' ')
                soup = BeautifulSoup(page, 'html.parser')
                # Scrape intermediate directories (Type-1)
                for link in soup.find_all('a', href=True):
                    href = link['href']
                    if 'topics/' in href:
                        new_URL = URL_base1 + href.split("/topics/")[1]
                        new_category = link.text.strip()
                        URL_list.append(new_URL)
                        update_lists(new_URL, new_category, parent_category, file2)
                        file1.write(f"{new_URL}\tQueued\t{new_category}\t{level+1}\n")
                        n_URLs += 1
                # Scrape final pages (Type-2)
                for link in soup.find_all('a', href=True):
                    href = link['href']
                    if validate(href):
                        new_URL = URL_base2 + href.split("/")[1]
                        new_category = link.text.strip()
                        final_URLs[new_URL] = (new_category, parent_category, level+1)
                        update_lists(new_URL, new_category, parent_category, file2)
                        file1.write(f"{new_URL}\tEndNode\t{new_category}\t{level+1}\n")
    print(f"Crawling completed. Parsed {parsed} URLs out of {n_URLs}.")
# Content extraction phase (Step 2)
def extract_content(begin, end):
    with open("list_final_URLs.txt", "r", encoding="utf-8") as file_input:
        Lines = file_input.readlines()
    with open(f"crawl_final_{begin}_{end}.txt", "w", encoding="utf-8") as file_output:
        for line in Lines:
            count, URL, category = line.split("\t")[:3]
            if int(count) >= begin and int(count) <= end:
                print(f"Page {count}: {URL}")
                resp = get_request_with_retries(URL)
                if resp and resp.status_code == 200:
                    page = resp.text.replace('\n', ' ')
                    file_output.write(f"{URL}\t{category}\t~{page}\n")
                else:
                    print(f"Error fetching {URL}: {resp.status_code if resp else 'Timeout'}")
    print(f"Content extraction from {begin} to {end} completed.")
# Main execution
if __name__ == "__main__":
    # Open files for logging
    with open("crawl_log.txt", "w", encoding="utf-8") as file1, \
         open("crawl_categories.txt", "w", encoding="utf-8") as file2:
        crawl(seed_URL="https://mathworld.wolfram.com/topics/ProbabilityandStatistics.html",
              seed_category="Probability and Statistics",
              file1=file1, file2=file2)
    # Extract content from final URLs (modify begin and end as needed)
    extract_content(begin=1, end=500)
    # Completion message
    print("All tasks completed successfully.")
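Because extract_content() takes a begin and end index, the heavier download step can also be run in batches and resumed if a run is interrupted. A small usage sketch, assuming roughly 2,000 final URLs (adjust the range to the size of your own crawl), and noting again that list_final_URLs.txt must exist before extraction:

# Hypothetical batched run: fetch final pages 500 at a time
# (batches 1-500, 501-1000, 1001-1500, 1501-2000).
for start in range(1, 2001, 500):
    extract_content(begin=start, end=start + 499)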
Mahalo
SIGNATURE:
Clifford "RAY" Hackett www.rayis.me RESUME: www.rayis.me/resume
I founded www.adapt.org in 1980; it now has over 50 million members.
$500 of material=World’s fastest hydrofoil sailboat. http://sunrun.biz