Free — no account required

A better way to read patents

Enter a patent number like you would on Google Patents — but get claim dependency trees, reference numeral mapping, figure bounding boxes, and multi-term search.

Try on a real patent
Scroll to learn more
Patent Reader
US11423567B2

Methods and systems for detecting head location and orientation using sensor data

Abstract

A method for detecting head location and orientation using a sensor array 102 and processing unit 104. The system generates a depth map and applies machine learning models to determine position and orientation in real time.

Description

[0001]

The present invention relates to methods and systems for detecting head location and orientation using sensor data.

[0002]

In various applications, such as virtual reality, augmented reality, and human-computer interaction, it is desirable to accurately track the position and orientation of a user's head.

[0003]

FIG. 1 illustrates an exemplary system 100 for head tracking. The system 100 includes a sensor array 102 and a processing unit 104 connected via a data bus 106.

[0004]

The sensor array 102 comprises one or more depth sensors configured to capture three-dimensional point cloud data of a scene. In one embodiment, the sensor array 102 includes a structured-light depth camera operating at a frame rate of at least 30 Hz.

[0005]

The processing unit 104 receives the point cloud data via the data bus 106 and generates a depth map representing the spatial distribution of surfaces in the captured scene. The depth map is stored in a frame buffer for subsequent processing.

[0006]

FIG. 2 illustrates a convolutional neural network architecture used by the processing unit 104 to determine head position and orientation. The network receives the depth map as input and outputs a six-degree-of-freedom pose estimate comprising three translational and three rotational components.

[0007]

Prior to inference, the system performs a calibration procedure in which the sensor array 102 captures a series of reference frames under controlled lighting conditions. These reference frames are used to compute intrinsic and extrinsic camera parameters stored in a calibration table.

Claims

1.

A method for detecting head location and orientation, the method comprising: receiving, by a processor, image data from a sensor array 102; generating a depth map...

2.

The method of claim 1, wherein the sensor comprises a depth camera configured to capture infrared structured light patterns.

3.

The method of claim 1, further comprising calibrating the sensor based on ambient lighting conditions.

Features

Everything you need to read a patent

A structured viewer built for patent professionals. Free, no login required.

Structured viewing

The full patent, structured for reading

Abstract, description, claims, and figures — all in one continuous layout. Reference numerals are clickable throughout the text: click one to highlight every occurrence and see it on the figures. Figure references like "FIG. 3" jump to the drawing. Claim cross-references like "the method of claim 1" link back to the parent claim. Paragraph numbers are preserved so you can cite exactly where something appears.

  • Reference numerals are interactive — click to cross-highlight across the entire document
  • Figure references (FIG. 1, FIG. 2A) jump directly to the drawing
  • Claim cross-references link back to parent claims
  • Paragraph numbers preserved for precise citation
Patent Reader

A method for detecting head location and orientation using a sensor array 102 and processing unit 104. The system generates a depth map and applies machine learning models.

[0003]

FIG. 1 illustrates an exemplary system 100 for head tracking. The system 100 includes a sensor array 102 and a processing unit 104 connected via a data bus 106.

1.

A method for detecting head location, the method comprising: receiving, by a processor, image data from a sensor array 102; generating a depth map…

2.

The method of claim 1, wherein the sensor comprises a depth camera configured to capture infrared structured light patterns.

Reference numerals

Every reference numeral, extracted and indexed

The reader extracts every reference numeral from the specification along with its description and how many times it appears. The sidebar gives you a browsable list — click any numeral to highlight it everywhere in the text and on the figures. Expand a numeral to see each occurrence with its surrounding context, so you can quickly understand how an element is used across the specification without scrolling through paragraphs.

  • Full list of numerals with descriptions and occurrence counts
  • Click to cross-highlight in text, figures, and sidebar simultaneously
  • Expand any numeral to browse each occurrence with surrounding context
  • Filter and search within the reference numeral list
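Under the hood, this kind of extraction can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration of the general technique (a regex pass plus occurrence counting), not Patent Reader's actual implementation:

```python
import re
from collections import Counter, defaultdict

# Match up to three lowercase words followed by a numeral,
# e.g. "sensor array 102" or "processing unit 104".
NUMERAL_RE = re.compile(r"\b((?:[a-z]+ ){1,3})(\d{2,4})\b")
ARTICLES = {"a", "an", "the"}

def extract_numerals(paragraphs):
    """Return {numeral: (most frequent description, total occurrence count)}."""
    counts = Counter()
    descriptions = defaultdict(Counter)
    for text in paragraphs:
        for match in NUMERAL_RE.finditer(text):
            words = match.group(1).split()
            while words and words[0] in ARTICLES:
                words.pop(0)  # drop leading articles from the description
            if not words:
                continue  # numeral with no usable description
            counts[match.group(2)] += 1
            descriptions[match.group(2)][" ".join(words)] += 1
    return {
        numeral: (descriptions[numeral].most_common(1)[0][0], count)
        for numeral, count in counts.items()
    }

paragraphs = [
    "The system 100 includes a sensor array 102 and a processing unit 104.",
    "The sensor array 102 sends data over the data bus 106.",
]
index = extract_numerals(paragraphs)
# index["102"] -> ("sensor array", 2)
```

Real patent text is messier (hyphenated numerals, letter suffixes like "102a", descriptions that follow the numeral), so a production extractor needs considerably more rules than this sketch.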
Reference Numerals
[0003]

The system 100 includes a sensor array 102 and a processing unit 104.

[0004]

The sensor array 102 captures depth data. The processing unit 104 receives data via bus 106.

[0005]

A memory module 108 stores the depth map generated by the sensor 102.

Numerals

100 system ×8
102 sensor array ×6
104 processing unit ×4
106 data bus ×3
108 memory module ×2
Figure bounding boxes

Reference numerals overlaid on the drawings

Reference numerals are detected on patent figures and drawn as clickable bounding boxes directly on the image. Select a numeral in the text and see it highlighted on the drawing. Or click a box on the figure to find that element in the specification. This is especially useful for mechanical and electrical patents where the relationship between the written description and the figures is critical to understanding the invention.

  • Clickable boxes drawn directly on patent figures
  • Bidirectional linking — text to figure and figure to text
  • Hover to preview the numeral's description without leaving the figure
  • Works with multi-sheet figures and detailed drawings
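Conceptually, the bidirectional linking rests on a simple index from numeral to detected boxes. The sketch below is a hypothetical illustration of that data structure, not the product's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Box:
    numeral: str   # e.g. "102"
    figure: str    # e.g. "FIG. 1"
    x: float       # normalized [0, 1] image coordinates
    y: float
    w: float
    h: float

def index_boxes(boxes):
    """Group detected boxes by numeral for text-to-figure lookup.
    The reverse direction (figure to text) reuses the numeral as the
    key into the specification's occurrence index."""
    by_numeral = {}
    for box in boxes:
        by_numeral.setdefault(box.numeral, []).append(box)
    return by_numeral

boxes = [
    Box("102", "FIG. 1", 0.12, 0.30, 0.05, 0.03),
    Box("104", "FIG. 1", 0.55, 0.42, 0.05, 0.03),
    Box("102", "FIG. 2", 0.20, 0.10, 0.05, 0.03),
]
by_numeral = index_boxes(boxes)
# by_numeral["102"] -> boxes on FIG. 1 and FIG. 2
```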
FIG. 1
102
104
108
106
102 — sensor array 6 occurrences
104 — processing unit 4 occurrences
Multi-term search

Search for multiple terms at once, each color-coded

Add multiple search terms and each gets its own highlight color. Browse matches by section — see how many times each term appears in the abstract, description, and claims. Click any result to jump to that occurrence. Toggle case-sensitive or whole-word matching per term. This is useful when you're tracking how specific claim terms appear across a specification, or locating where a prior art reference discusses particular features.

  • Each term gets a distinct highlight color for easy visual scanning
  • Match counts broken out by section (abstract, description, claims)
  • Click any match to jump directly to it in the text
  • Per-term toggles for case sensitivity and whole-word matching
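To illustrate the idea (a minimal sketch, not the product's actual search engine), per-term, per-section match counting looks roughly like this:

```python
import re

def multi_term_search(sections, terms, whole_word=True, case_sensitive=False):
    """Count matches of each term in each section.

    sections: {"abstract": "...", "description": "...", "claims": "..."}
    Returns {term: {section_name: match_count}}.
    """
    flags = 0 if case_sensitive else re.IGNORECASE
    results = {}
    for term in terms:
        pattern = re.escape(term)
        if whole_word:
            pattern = rf"\b{pattern}\b"
        results[term] = {
            name: len(re.findall(pattern, text, flags))
            for name, text in sections.items()
        }
    return results

sections = {
    "abstract": "A sensor generates a depth map.",
    "claims": "receiving sensor data; generating a depth map from the sensor data",
}
hits = multi_term_search(sections, ["sensor", "depth map"])
# hits["sensor"] -> {"abstract": 1, "claims": 2}
```

In a viewer, each term's pattern would also drive its own highlight color; the counting logic stays the same.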
Search
sensor depth map processor
[0003]

The sensor array captures data and generates a depth map using the processor. Multiple sensor elements are arranged in a grid pattern.

[0004]

The processor computes orientation from the depth map. Each sensor reading is filtered before being passed to the processor.

sensor ×14
depth map ×8
processor ×6
Try Patent Reader

Free, no account required

How It Works

Two steps to read any patent

No signup, no setup. Just enter a patent number and start reading.

01

Enter a patent number

Paste any US, EP, or other patent or publication number. The reader fetches and structures the full patent automatically.

02

Explore the patent

Read claims with dependency trees, browse figures with reference numeral bounding boxes, navigate citations, and search across the full text.
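The claim cross-references are what drive the dependency trees. As a rough, hypothetical sketch (not the reader's real parser), extracting each claim's parent can be as simple as matching the "claim N" phrase in the claim preamble:

```python
import re

# Matches references like "The method of claim 1, wherein ..."
DEP_RE = re.compile(r"claim\s+(\d+)", re.IGNORECASE)

def build_dependency_tree(claims):
    """claims: {claim_number: claim_text}.
    Returns {claim_number: parent_number or None for independent claims}."""
    parents = {}
    for number, text in claims.items():
        match = DEP_RE.search(text)
        parents[number] = int(match.group(1)) if match else None
    return parents

claims = {
    1: "A method for detecting head location, the method comprising receiving image data.",
    2: "The method of claim 1, wherein the sensor comprises a depth camera.",
    3: "The method of claim 1, further comprising calibrating the sensor.",
}
tree = build_dependency_tree(claims)
# tree -> {1: None, 2: 1, 3: 1}
```

Multiple-dependent claims ("any of claims 2-4") need extra handling, which is why a dedicated viewer beats eyeballing the numbering.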

Coverage

Any patent, any jurisdiction

Enter a publication number from any of these offices; the reader fetches and structures the full document for you.

US United States Patent and Trademark Office (USPTO)
EP European Patent Office (EPO)
WO World Intellectual Property Organization (WIPO/PCT)
CN China National Intellectual Property Administration (CNIPA)
JP Japan Patent Office (JPO)
KR Korean Intellectual Property Office (KIPO)
CA Canadian Intellectual Property Office (CIPO)
AU IP Australia
IN Indian Patent Office (CGPDTM)
GB UK Intellectual Property Office (UKIPO)
DE German Patent and Trade Mark Office (DPMA)
FR French National Institute of Industrial Property (INPI)

And many more jurisdictions worldwide.

Pairs naturally with the OA Response Agent

Read the cited prior art in Patent Reader, then analyze the rejection with the AI agent. The reader helps you understand the references; the agent helps you respond.

Learn about the OA Response Agent

FAQ

Patent Reader FAQ

Is Patent Reader really free?

Yes, completely free with no account required. Enter any patent number and get structured viewing, claim dependency trees, reference numeral mapping, figure bounding boxes, citation browsing, and multi-term search. No hidden paywalls.

How is this different from Google Patents?

Google Patents shows you the raw document. Patent Reader structures it for prosecution work — claim dependency trees, clickable reference numerals cross-linked to figures, bounding boxes overlaid on drawings, and multi-term color-coded search. It's the viewer you wish Google Patents had.

Where does Patent Reader get its data?

Patent data is sourced from official patent office publications. You get the full specification, claims, figures, citations, legal events, and family information — structured into claim trees, reference numeral extraction, and figure mapping.

What patent offices are supported?

US (USPTO), European (EPO), Chinese (CNIPA), Japanese (JPO), Korean (KIPO), PCT (WIPO), and many more. Enter the publication number in any standard format.

Can I link to a specific patent view?

Yes. The URL updates as you navigate, so you can share a direct link to any patent. Useful for sharing references with colleagues or bookmarking patents you're working with.

Try Patent Reader

Free, no account required